The latest technology brings convenience, wonder, and plenty of positive emotions into our lives, but we must stay vigilant about the possible negative consequences of innovation. Here is a list of the 7 most dangerous technological trends that have already shown their dark side.
“Propaganda is the art of photographing the devil without hooves and horns.”
Hans Kasper (1916 - 1990), German writer and author of radio plays.
AI-Generated Fake News
The world's media increasingly report on programs like GROVER, an artificial intelligence system capable of writing a fake news article on any topic. Such programs generate more believable articles than human copywriters do. The secret is that the AI processes large amounts of data and backs up its articles with "facts" that a reader cannot always verify.
The most successful in this direction has been the nonprofit company OpenAI, backed by Elon Musk. OpenAI's results are so good that the organization initially decided not to release its research publicly, to prevent dangerous misuse of the technology. The main danger of plausible fake news is that its quality can deceive even critically minded people who are normally resistant to propaganda.
Combat Drones and Drone Swarms
The loudest example of the negative use of drones was the recent attack on Saudi Arabia's oil facilities, which caused enormous economic damage. As a result of the attack, global oil supply fell by about 5%, and the convenience of using drones for military operations suggests that similar situations will recur.
A swarm of drones can be organized to reach targets by coordinating with one another, creating a new type of super-dangerous weapon. The technology is still at the experimental stage, but a swarm able to coordinate its behavior to carry out the most complex military tasks is fast becoming reality.
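The coordination described above rests on simple decentralised rules each drone follows on its own. Below is a deliberately toy sketch (all positions, step sizes, and parameter names invented for illustration) of consensus-style flocking, where each drone steers toward a shared target while staying cohesive with the rest of the swarm:

```python
# Toy sketch of decentralised swarm coordination (all numbers invented):
# each drone steers toward a shared target while also nudging itself
# toward the swarm's centre, the basic ingredient of flocking control.

TARGET = (10.0, 10.0)

def step(positions, pull=0.2, cohesion=0.1):
    """One update: move each drone toward TARGET and toward the centroid."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return [(x + pull * (TARGET[0] - x) + cohesion * (cx - x),
             y + pull * (TARGET[1] - y) + cohesion * (cy - y))
            for x, y in positions]

swarm = [(0.0, 0.0), (2.0, 1.0), (-1.0, 3.0)]
for _ in range(50):
    swarm = step(swarm)
print(swarm)  # every drone has converged on the target
```

No drone needs a global controller: each applies the same local rule, which is exactly why a swarm is hard to disrupt by destroying any single unit.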
Smart Home Device Spying
For smart home devices to respond to requests and be as useful as possible, they must be equipped with microphones that listen for voice commands. When a person installs a smart speaker in his room, he also agrees that a small, scrupulous spy now lives in his house, one that does not miss a single word.
All smart devices collect information about habits and preferences, place of residence and routes of movement, times of arrival at and departure from home. This information makes life more convenient, but it also opens the door to misuse. Thieves and scammers are actively working on tools to take possession of all the information collected. In the event of a successful hack of personal data from the cloud servers of Amazon, Google, Yandex, or any other artificial intelligence platform, attackers would receive everything they need to blackmail a person or steal real things from their home.
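To see how revealing such logs are, here is a toy sketch (the event names, timestamps, and log format are all hypothetical) that infers a household's typical departure and arrival times from just a few smart-lock events:

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical event log, as a smart-home hub might store it:
# (timestamp, event). Names and format are illustrative only.
events = [
    ("2024-03-04 07:58", "front_door_lock"),
    ("2024-03-04 18:12", "front_door_unlock"),
    ("2024-03-05 08:03", "front_door_lock"),
    ("2024-03-05 18:05", "front_door_unlock"),
    ("2024-03-06 07:55", "front_door_lock"),
    ("2024-03-06 18:20", "front_door_unlock"),
]

def average_minute(minutes):
    """Average time of day, in minutes since midnight."""
    return sum(minutes) // len(minutes)

by_event = defaultdict(list)
for stamp, event in events:
    t = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
    by_event[event].append(t.hour * 60 + t.minute)

for event, minutes in by_event.items():
    avg = average_minute(minutes)
    print(f"{event}: typically around {avg // 60:02d}:{avg % 60:02d}")
```

Three days of lock events are enough to tell an attacker that the house is reliably empty between roughly 08:00 and 18:00 — no listening to conversations required.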
One of the biggest scandals in this area came this spring, when it emerged that Amazon employees were listening to recordings of customers' conversations with the Amazon Echo smart speaker. Obviously, Amazon employees are not supposed to use this information for selfish purposes, but no one can guarantee that.
Face Recognition with Video Cameras
Last year, smartphone maker Huawei was accused of using face recognition technology for surveillance and racial profiling, as well as of handing access keys to foreign networks over to Chinese intelligence.
Millions of cameras on smartphones and laptops are used to track and recognize people, and not only in China: similar practices have been observed in almost every country in the world. The only difference is that in some cases it has been proven (the Snowden revelations in the USA), and in others it has not.
Deepfake Voice and Video
Artificial intelligence can generate phrases in the voice of any person; a single fragment of an audio recording of that voice is enough. In a similar way, AI can create a video of any person that looks natural and believable. The possible negative consequences of such videos and audio recordings are vast: from hacked bank accounts to blackmail and political scandals.
Deepfake technology uses machine learning and artificial intelligence to map faces and model patterns of behavior. The source data for generating a fake are the various expressions of a person's face, which are easiest to extract from a large collection of recordings. Previously, such volumes of openly accessible footage existed only for celebrities, but social networks have changed that: ordinary people now upload gigabytes of amateur video of themselves, providing deepfake makers with all the source material they need.
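The classic deepfake design pairs one shared encoder with a separate decoder per person; decoding person A's latent code with person B's decoder performs the swap. The following is a deliberate caricature, not a real network: the "encoder" is crude downsampling, each "decoder" is a hardcoded style offset standing in for learned weights, and the 4-value "faces" are made up:

```python
# Toy sketch of the shared-encoder / per-person-decoder deepfake layout.
# All numbers and "faces" here are invented 4-value toys, not real data.

def encoder(face):
    """Shared encoder: crude 2x downsampling into a latent code
    that captures pose/expression but not identity."""
    return [(face[i] + face[i + 1]) / 2 for i in range(0, len(face), 2)]

def make_decoder(style):
    """Person-specific decoder: upsample the latent code and apply that
    person's 'style' (a stand-in for weights a real network would learn
    from hours of their footage)."""
    def decoder(latent):
        out = []
        for value in latent:
            out.extend([value + style, value - style])
        return out
    return decoder

decode_a = make_decoder(style=0.1)   # "trained" on person A's videos
decode_b = make_decoder(style=0.9)   # "trained" on person B's videos

face_a = [0.2, 0.4, 0.6, 0.8]        # a "frame" of person A
latent = encoder(face_a)             # identity-agnostic representation
fake = decode_b(latent)              # A's expression in B's likeness
print(fake)
```

The key insight survives the caricature: because the encoder never stores identity, swapping the decoder swaps the face while keeping the original expression and motion.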
A program created at the University of California, Berkeley is currently the most successful at identifying fake videos.
AI and Phishing Viruses
Artificial intelligence greatly simplifies the work of phishing networks, which need an effective tool to automate and scale their operations. AI helps attackers find "weak" email addresses and social network accounts, compose more convincing phishing messages, bypass antivirus software more effectively, and, most importantly, collect money automatically without the risk of being traced by law enforcement. In recent years, the withdrawal of funds has increasingly been carried out in cryptocurrency.
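The same statistical text techniques cut both ways and also power defenses. As a minimal sketch, here is a naive-Bayes-style scorer trained on a made-up six-message corpus (real filters train on millions of labeled messages); it shows how word frequencies alone can flag phishing wording:

```python
import math
from collections import Counter

# Tiny made-up training corpus; real filters use millions of messages.
phishing = ["verify your account password now",
            "urgent click this link to claim your prize",
            "your account is locked click to verify"]
legit = ["meeting moved to three pm see agenda",
         "lunch tomorrow question about the report",
         "please review the attached agenda before the meeting"]

def train(messages):
    """Count word occurrences across a class of messages."""
    words = Counter(w for m in messages for w in m.split())
    return words, sum(words.values())

def log_prob(message, words, total, vocab_size):
    """Laplace-smoothed log-likelihood of the message under one class."""
    return sum(math.log((words[w] + 1) / (total + vocab_size))
               for w in message.split())

p_words, p_total = train(phishing)
l_words, l_total = train(legit)
vocab_size = len(set(p_words) | set(l_words))

def looks_like_phishing(message):
    return (log_prob(message, p_words, p_total, vocab_size) >
            log_prob(message, l_words, l_total, vocab_size))

print(looks_like_phishing("click to verify your password"))  # prints True
```

The unsettling symmetry is that an attacker can run the same model in reverse, generating wording that scores as "legitimate" against known filters.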
Smart Dust
Microelectromechanical systems (MEMS) can be as small as grains of sand. The smallest unit of smart dust is called a mote: a sensor with its own computing node, sensing elements, power supply, and data transmission. Motes can link up with other motes to form what is called smart dust. Attackers already use such systems on an industrial scale in cases where surveillance must be hidden as thoroughly as possible.
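The mote-to-mote relaying that makes smart dust work can be sketched as a toy routing simulation. Positions, radio range, readings, and all names below are invented for illustration; real motes use far more sophisticated low-power mesh protocols:

```python
# Toy simulation of a smart-dust network: each mote senses a value and
# relays it hop by hop through neighbours toward a base station.

class Mote:
    """A mote: a microscopic node with sensing, compute, power and radio."""
    def __init__(self, name, position):
        self.name = name
        self.position = position   # 1-D position along a line, for simplicity

    def sense(self):
        # Stand-in for a real sensor reading (sound, vibration, temperature...)
        return {"source": self.name, "reading": 20.0 + self.position}

    def in_range(self, other, radio_range=2.0):
        return abs(self.position - other.position) <= radio_range

def route_to_base(motes):
    """Greedy multi-hop routing: each reading hops through the reachable
    neighbour closest to the base station at position 0.0."""
    collected = []
    for mote in motes:
        if mote.position == 0.0:
            continue                    # the base station only receives
        packet, current, hops = mote.sense(), mote, 0
        while current.position > 0.0:
            candidates = [m for m in motes
                          if current.in_range(m) and m.position < current.position]
            if not candidates:
                break                   # no reachable neighbour: packet lost
            current = min(candidates, key=lambda m: m.position)
            hops += 1
        if current.position == 0.0:
            packet["hops"] = hops
            collected.append(packet)
    return collected

motes = [Mote("base", 0.0), Mote("m1", 1.5), Mote("m2", 3.0), Mote("m3", 4.5)]
packets = route_to_base(motes)
for p in packets:
    print(p)
```

Because no single mote needs long-range radio, each node stays tiny and power-frugal, which is precisely what makes smart dust so hard to detect.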