How to protect yourself from AI-created deepfake audio scams involving your children

In its latest organized crime threat assessment, Europol warns that artificial intelligence is helping to accelerate crime. The agency writes, "AI is fundamentally reshaping the organized crime landscape. These technologies automate and expand criminal operations, making them more sophisticated and harder to detect." With that in mind, one particular form of deepfake attack is increasingly being used in scams targeting more and more consumers. First, some background.

Fraudsters use artificial intelligence to manipulate you through "social engineering"

Artificial intelligence is used more and more in online scams that manipulate people into taking certain actions through persuasion, deception, and intimidation. This is called social engineering. For example, a scammer might send you a fake text or email that appears to come from your utility company, claiming your service will be cut off unless you pay immediately. The scammers know you will do almost anything to avoid losing service, so they pressure you into making a payment you don't actually owe. At the same time, they trick you into revealing personal data that can be used to access your financial accounts.

Former cybersecurity expert Evan Dornbush told Forbes that AI lets scammers craft convincing messages faster than ever, and at a much lower cost. "AI reduces costs for criminals, and the community needs new ways to either reduce their payouts, or increase their operating budgets, or both," Dornbush said.

While most people picture fake videos when they hear the word "deepfakes," the latest battleground is fake audio and its use in scams. Imagine a phone call in which your daughter says she is being held against her will and won't be released unless a ransom is paid. Scams like this succeeded even before AI, but imagine hearing the shaky, frightened voice of your daughter, wife, son, or even one of your parents describing a terrifying ordeal and begging you to pay whatever the criminals demand. With AI, these calls can be fabricated even while the relative you hear pleading for their life is actually safe at the movies or at home.

How to prevent your family from being fooled by a deepfake audio attack

Imagine being certain that the caller was one of your children or another relative; you would be on your way to the bank in no time. The FBI has already released a public service announcement (alert number I-120324-PSA) warning people about these attacks. The G-Men have a great idea for countering them: the next time you talk with your family, agree on a secret code word that no one outside the family knows. That way, if you receive a call from your daughter, you can confirm whether the voice coming through your iPhone's speaker is really hers or an AI-created deepfake.

Don't put this off, because you never know when you might be the next deepfake victim. If all of this sounds familiar, it's because we first reported the FBI's warning back in December. But now, with Europol weighing in and voice deepfakes getting more convincing, the level of concern is even higher.

That December article drew a comment from a reader who called the secret code word "the most stupid and ridiculous advice." The commenter worried that in a moment of stress, your relative might not remember the code word, leading you to dismiss what could actually be a real ransom call. While that may be a legitimate concern, the answer is to pick a code word that is easy to remember even under stress.

If you don't like the idea of a code word, listen closely to make sure your relative's voice doesn't sound robotic, and pay attention to whether certain phrases are repeated out of context. If they are, you are probably listening to a deepfake.

The Honor Magic 7 Pro smartphone was released in January with a dedicated deepfake detection feature. About the feature, Honor wrote, "AI Deepfake Detection has been trained on a large dataset of videos and images related to online fraud, allowing the AI to perform identification, screening, and comparison within three seconds. If synthetic or altered content is detected, the feature immediately issues a risk warning to remind users, discouraging them from continuing to engage with the potentially fraudulent caller." Check out our review of the Magic 7 Pro.
