The future of AI: egalitarian or dystopian?

Once upon a time, artificial intelligence (AI) was viewed as distant and unachievable, nothing more than a fantasy to furnish the plots of science fiction stories. We have made numerous breakthroughs since, with AI software now powerful enough to understand natural language, navigate unfamiliar terrain, and augment scientific research. As COVID-19 reduced our ability to interact with each other, we have seen AI-powered machines step in to fill that void and AI being used to advance medical research towards better treatments. This ubiquity may only be the beginning, with experts projecting that AI could contribute a staggering $15.7 trillion to the global economy by the end of the decade. Unsurprisingly, many prosperous members of society view the future of AI optimistically, as one of ever-increasing efficiency and profit. Yet many on the other side of the spectrum look on far more apprehensively: AI may have inherited the best of human traits, our intelligence, but it has also inherited one of our worst, our bias and prejudice. AI fraught with discrimination is already being used to perpetuate systemic inequalities. If we fail to overcome this, an AI-dominated future would be bleak and dystopian: we would be moving forward in time yet backwards in progress, accelerating mindlessly towards a less equitable society.

Dystopia is where we are headed if we do not reverse course. AI is increasingly being used to make influential decisions in people's lives, and those decisions are often biased. The root cause is training data: AI learns to make future decisions from past data, and any bias in that data is inherited by the model. For instance, AI hiring tools are increasingly used to assess job applicants. Trained on records of past employees who were mostly men, the AI absorbs this bias and continues the cycle of disfavoring women, perpetuating the lack of diversity in key industries such as tech. This is unacceptable, and it is to say nothing of the many other ways AI can reinforce inequality. In what has been called the 'tech to prison pipeline', AI trained on historical criminal data is being used in criminal justice to inform sentencing and risk assessments. Because African Americans are overrepresented in that training data, these systems have been shown to recommend harsher outcomes for African American defendants.
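The mechanism is easy to demonstrate with a toy model. The sketch below, with all data and numbers invented purely for illustration, fits a naive frequency-based "screener" on a skewed historical hiring dataset and shows that it simply reproduces the skew:

```python
# Toy illustration: a naive "screening model" fit on biased historical hiring
# data reproduces that bias. All numbers are invented for illustration.

def fit_screener(history):
    """Learn P(hired | gender) from past hiring records."""
    stats = {}
    for gender, hired in history:
        n, k = stats.get(gender, (0, 0))
        stats[gender] = (n + 1, k + hired)
    return {g: k / n for g, (n, k) in stats.items()}

# Historical data: mostly men, and men were hired at a higher rate.
history = [("M", 1)] * 60 + [("M", 0)] * 20 + [("F", 1)] * 5 + [("F", 0)] * 15

screener = fit_screener(history)

# Two equally qualified candidates receive different scores purely
# because of the skew in the training data.
print(screener["M"])  # 0.75
print(screener["F"])  # 0.25
```

Real hiring models are far more complex, but the failure mode is the same: the model has no notion of fairness beyond the patterns in its training set.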

To move towards a future with AI that is not only intelligent but fair, we must enact regulation that outlaws discriminatory uses and ensure that the developers of AI software are diverse, so that their perspectives are reflected in the software they create.

Perhaps counterintuitively, a world with fair AI could see social justice advanced even further than a world before any AI. AI became unfair only because it absorbed the biases that humans themselves hold. But if genuinely fair AI replaced humans in decision making, those decisions would carry less of that bias, and equality would increase.

Achieving fair AI may be the key to a better future — one of increased economic prosperity, furthered scientific progress, and more equity. But in the meantime, we must be diligent in ensuring that the AI being used reflects the best of humanity, rather than our worst.

Modern Elections: Algorithms Changing The Political Process

The days of grassroots campaigning and political buttons are long gone. Candidates have found a new way of running, a new manager. Algorithms and artificial intelligence are quickly becoming the standard when it comes to the campaign trail. These predictive algorithms could be deciding the votes of millions using the information of potential voters.

Politicians are using AI to manipulate voters through targeted ads. Slava Polonski, PhD, explains how: “Using big data and machine learning, voters received different messages based on predictions about their susceptibility to different arguments.” Instead of going door to door, using the same message for each person, politicians use AI to create specific knocks they know people will answer to. This all takes place from a website or email.

People tagged as conservative receive ads that reference family values and maintaining tradition. Voters more susceptible to conspiracy theories are shown ads based on fear, and they all could come from the same candidate.
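In principle, the targeting step is just a lookup: a model's tag for each voter selects which framing of the same candidate's message they see. The sketch below is a minimal illustration; the tags and ad copy are invented, and real campaign systems use far richer voter models:

```python
# Minimal sketch of message micro-targeting: one candidate, many framings.
# The voter "tags" and ad copy here are invented for illustration.

ADS = {
    "conservative": "Candidate X will defend family values and tradition.",
    "conspiracy-prone": "They don't want you to know what Candidate X found.",
    "default": "Candidate X: leadership for everyone.",
}

def pick_ad(voter_profile):
    """Return the ad framing predicted to resonate with this voter."""
    for tag in voter_profile.get("tags", []):
        if tag in ADS:
            return ADS[tag]
    return ADS["default"]

print(pick_ad({"tags": ["conservative"]}))
print(pick_ad({"tags": ["conspiracy-prone"]}))
print(pick_ad({"tags": []}))  # falls back to the generic message
```

Each voter sees only their own framing, which is what makes this kind of persuasion so hard to scrutinize publicly.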

The role of AI in campaigns doesn’t stop at ads. Indeed, in a post-mortem of Hillary Clinton’s 2016 campaign, the Washington Post revealed that the campaign was driven almost entirely by a machine-learning algorithm called Ada. More specifically, the algorithm was said to “play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads” (Berkowitz, 2021). After Clinton’s loss, questions arose as to the effectiveness of using AI in this setting. In 2020, both Biden and Trump stuck to AI primarily for advertising purposes.

Image Credit: GoodWorkLabs

This has ushered in the utilization of bots and targeted swarms of misinformation to gain votes. Candidates are leading “armies of bots to swarm social media to hide dissent” (Berkowitz, 2020). In fact, in an analysis of the role of technology in political discourse entering the 2020 election, The Atlantic found that “about a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote.” Individual votes are being influenced by social media accounts without a human being behind them. All over the globe, AI with an agenda can tip the scales of an election.

The use of social media campaigns with large-scale political propaganda is intertwined within elections and ultimately raises questions about our democracy, according to Dr. Vyacheslav Polonski, Network Scientist at the University of Oxford. Users are manipulated, receiving different messages based on predictions about their susceptibility to different arguments for different politicians. “Every voter could receive a tailored message that emphasizes a different side of the argument…The key was just finding the right emotional triggers for each person to drive them to action” (Polonski 2017).

The use of AI in elections raises much larger questions about the stability of the political system we live in. “A representative democracy depends on free and fair elections in which citizens can vote with their conscience, free of intimidation or manipulation. Yet for the first time ever, we are in real danger of undermining fair elections — if this technology continues to be used to manipulate voters and promote extremist narratives” (Polonski 2017).

However, the use of AI can also enhance election campaigns in ethical ways. As Polonski says, “we can program political bots to step in when people share articles that contain known misinformation [and] we can deploy micro-targeting campaigns that help to educate voters on a variety of political issues and enable them to make up their own minds.”

The ongoing use of social media readily informs citizens about elections, their representatives, and the issues around them. AI can deepen this engagement; as Polonski says, “…we can use AI to listen more carefully to what people have to say and make sure their voices are being clearly heard by their elected representatives”.

So while AI in elections raises many concerns regarding the future of campaigning and democracy, it has the potential to help constituents without manipulation when employed in the right setting.

AI is being used to enhance performance rates of your favorite athletes

When we think about the way artificial intelligence is used in sports, we have to look back to the past. Take Billy Beane, the general manager of the Oakland A’s, a professional Major League Baseball team that used quantitative data to predict which undervalued players would succeed in the MLB. The strategy worked well: the A’s reached the playoffs in consecutive seasons despite one of the league’s smallest payrolls, and Beane received many accolades, as well as a movie about him, Moneyball. Fast forward to today, and analytics and artificial intelligence are being used across multiple sports industries. Names like Theo Epstein (Chicago Cubs), Sam Presti (Oklahoma City Thunder), and Zinedine Zidane (Real Madrid) are pioneers who have used AI analytics to help them make decisions on trades, player acquisitions, drafting, and contract negotiations. Beyond the front office, artificial intelligence is employed to make more accurate decisions about rules and regulations, to protect player safety, and to improve athlete performance. Take a few examples: an artificial intelligence “catcher” that shows the audience during a game whether the umpire’s call was correct, computer vision algorithms used during NBA games to analyze players’ shot form, and, perhaps most importantly, the use of AI to analyze concussion impacts and predict whether a force to the head has actually caused an injury for NFL players. Examples like the last one show the impact artificial intelligence can have on player safety and recovery, improving the experience of both players and fans.

So how exactly do all of these things work? In sports, data is king. Athletes execute hundreds of plays and actions in a single game or season, generating treasure troves of data for AI neural networks to analyze in order to make better predictions about players. Sports have a huge need for statisticians, and nearly every statistic related to a given sport is recorded, so the concepts of big data and distributed data are especially important here. For example, take Sportlogiq, a Canadian startup that provides NHL broadcasters with commentary prompts generated by natural-language-processing neural networks, which compare the live broadcast against players’ historical statistics and analytics. If a player is performing better than they typically do, the neural network prompts the broadcaster to discuss it. For such a prediction to be made, the network must first analyze mountains of data about that specific player. Or take Nike’s smart basketball, analytics software often employed by NBA teams to improve player performance. Nike analyzes every single bounce of the ball and has been able to identify different segmentation points on a player’s fingers to determine exactly where they are dribbling the ball, how they grip it when they shoot, and even how they attempt a steal or palm the ball when taking it up the court. These small, specific data points are recorded thousands of times, and Nike provides players constant feedback on how to improve specific parts of their game.
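The "performing better than usual" trigger can be approximated with a simple z-score over a player's historical numbers. This is a sketch of the idea only, not Sportlogiq's actual method, and the stat line below is invented:

```python
# Sketch of a "notable performance" trigger: flag a game when a player's
# stat line sits well above their historical average (z-score > 2).
# This approximates the idea; it is not Sportlogiq's actual system.

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def should_prompt(history, tonight, threshold=2.0):
    """Return True if tonight's stat is unusually high for this player."""
    z = (tonight - mean(history)) / stdev(history)
    return z > threshold

goals_per_game = [0, 1, 0, 1, 1, 0, 2, 1, 0, 1]   # invented history
print(should_prompt(goals_per_game, 4))  # True: worth mentioning on air
print(should_prompt(goals_per_game, 1))  # False: a typical night
```

The production systems doubtless use richer models, but the core pattern is the same: compare tonight's numbers against a statistical baseline built from historical data.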

Both these examples illustrate what artificial intelligence can do in sports, and they show how powerful technology could revolutionize the sports we watch every day. The market for artificial intelligence sports companies is growing, making this a great field to get into and one that many can enjoy.

The Shocking Carbon Footprint of AI Models

Image Credit: datascience.aero

In recent news, NFTs, or non-fungible tokens, have garnered attention for their environmental impact. NFTs are digital assets that represent something in the real world, like an art piece. This has brought attention to the environmental impact of things on the internet and of technology in general. As of late, AI models have been shown to consume vast amounts of energy in training for their express purposes.

How much energy is being used?

While training an AI model, a team of researchers led by Emma Strubell at the University of Massachusetts Amherst noticed that the model used exorbitant amounts of energy. For an AI model to work for its intended purpose, it has to be trained through various tests depending on the type of model and its purpose. In this case, the team calculated the total carbon dioxide emitted before the model was fully trained, and the figure was staggering.

That figure is roughly five times the emissions that a car releases over its entire lifetime. The University of Massachusetts Amherst team concluded that the training of just one neural network accounts for “roughly 626,000 pounds of carbon dioxide” released into the atmosphere.
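Estimates like these come from a straightforward back-of-the-envelope calculation: hardware power draw, multiplied by training time, multiplied by the carbon intensity of the electricity grid. The sketch below uses invented hardware numbers, not Strubell's detailed per-component accounting; the 0.954 lbs-per-kWh factor is an approximate U.S. grid average:

```python
# Back-of-the-envelope CO2 estimate for a training run:
#   energy (kWh) = GPUs x watts per GPU x hours / 1000
#   emissions (lbs CO2) = energy x grid carbon intensity
# Hardware numbers below are invented for illustration.

def training_emissions_lbs(num_gpus, watts_per_gpu, hours, lbs_co2_per_kwh=0.954):
    kwh = num_gpus * watts_per_gpu * hours / 1000
    return kwh * lbs_co2_per_kwh

# e.g. 8 GPUs at 300 W each, running for 1000 hours
print(round(training_emissions_lbs(8, 300, 1000)))  # 2290
```

Scaling the GPU count and training time up to what large models require makes clear how quickly the emissions climb into the hundreds of thousands of pounds.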

This AI model is one of hundreds releasing mass amounts of emissions that harm the environment. AI models have been used in medicine, with chatbots identifying symptoms in patients and a learning algorithm created by researchers at Google AI Healthcare being trained to identify breast cancer. However, in the coming years the environmental effects may counteract the good these models do.

What is the real harm? And how does this happen?

This energy usage with thousands of AI models coupled with the Earth’s already rising climate crisis may cause our emissions to reach an all-time high. With 2020 as one of the hottest years on record, the emissions from these models only add to the problem of climate change. Especially with the growth of the tech field, it is alarming to see the high emission rates of algorithmic technology.

The demand for AI to solve multitudes of problems will continue to grow, and with it comes more and more data. Strubell concludes that “training huge models on tons of data is not feasible for academics.” This is due to a lack of access to advanced computers better suited to processing such masses of data. With these advanced computers, the same information could be synthesized and processed with generally less carbon output, according to Strubell and her team.

She goes on to say that, as time passes, it becomes less feasible for researchers and students to process mass amounts of data without these computers. Groundbreaking studies often come at the cost of the environment, which is why advanced computers are necessary for progress in the field of AI to continue.

What are the solutions?

Currently, the best solution proposed by the researchers is to invest in faster and more efficient computers to process these mass amounts of information. Funding for this kind of hardware would cut down on the energy usage of training and lessen the environmental impact of AI models.

Image Credit: ofa.mit.edu

At MIT, the researchers there were able to cut down on their energy usage by utilizing the computers donated to them by IBM. Because of these computers, the researchers were able to process millions of calculations and write important papers in the AI field.

Another solution from MIT is the once-for-all (OFA) network. An OFA network is trained once to “support versatile architectural configurations.” Instead of using loads of energy to train each model individually, it trains one general network and extracts the specific sub-network each application needs. This helps cut down on the overall cost of these models.
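The idea can be sketched as training one oversized network and then slicing out smaller subnetworks that reuse its weights, instead of training each deployment from scratch. Below is a toy illustration with plain Python lists; it conveys the concept only and is not MIT's actual OFA implementation:

```python
# Toy sketch of the once-for-all idea: one "supernet" weight matrix is
# trained once, and smaller deployments reuse a slice of it rather than
# training separate models. Concept illustration only, not MIT's OFA code.

def make_supernet(width):
    """A stand-in for a trained layer: a width x width weight matrix."""
    return [[(i + 1) * (j + 1) * 0.01 for j in range(width)] for i in range(width)]

def extract_subnet(supernet, width):
    """Take the top-left width x width slice: a smaller net, no retraining."""
    return [row[:width] for row in supernet[:width]]

supernet = make_supernet(8)               # trained once, at full cost
phone_net = extract_subnet(supernet, 4)   # smaller config for a phone
watch_net = extract_subnet(supernet, 2)   # even smaller for a watch

print(len(phone_net), len(phone_net[0]))  # 4 4
# The small nets share the supernet's weights exactly:
print(watch_net[0][0] == supernet[0][0])  # True
```

Because the expensive training happens only once, the energy cost of supporting many device configurations collapses into a single run.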

Though there were concerns that this could compromise the accuracy of the system, the researchers tested for this and found that using the OFA network did not reduce the accuracy of the resulting AI systems.

With these solutions, it is important to understand that our future is not hopeless. Researchers are actively looking at ways to alleviate this issue, and with the correct plans and actions, the innovations of the future can better the world rather than harm it.

Deep Fakes: A Threat to Truth

Image Credit: Business Insider

In most situations, it is assumed that video footage is fact. What is found on a security tape is indisputable evidence, and fiction is best left to cartoons and special effects. Deep fakes, which use artificial intelligence algorithms to change a person’s face to that of another, blur the lines of fact and fiction. What was once an easily dismissible fake news headline is now bolstered by video evidence. As the algorithms that create deep fakes become smarter, many question the consequences of potentially slanderous deep fakes and the legislative approaches to mitigating their harm.

In 2020, a State Farm commercial played during ESPN’s The Last Dance. The commercial appeared to showcase a 1998 clip of an ESPN analyst making an accurate prediction about the year 2020. The clip was a deep fake, generated with the help of artificial intelligence software. Viewers likely realized that the clip was fake, but they might not have considered the ethical implications of the video and, subsequently, of all deepfakes.

At the beginning of 2019, the number of deepfakes on the internet nearly doubled over the span of three months (Toews 2020). As artificial intelligence technology continues to improve, this growth will continue. While some deep fakes, such as the doctored clip of the analyst, are lighthearted, malicious deep fakes pose a serious threat to society. One example is deep fakes in the political world. Deep fakes can be a powerful mechanism for destroying a public figure’s credibility by distorting their words, as well as for spreading false information to the individuals who view them. Deep fakes can cause harm in many spheres of society, which makes them a concern for everyone.

There are steps that tech firms, social media platforms, and the government are taking to alleviate this problem. Facebook teamed up with researchers to create deep fake detection software. One such program, DeepFace, identifies human faces by employing a nine-layer neural network trained on over four million images to identify deep fakes with 97% accuracy.

The United States government has been addressing deep fakes through legislation. The 2021 NDAA, which recently became law, requires the Department of Homeland Security to issue a report on deep fakes every year for the next five years. The reports detail how deep fakes can be used for harm. The Identifying Outputs of Generative Adversarial Networks Act was signed into law in December 2020. As a result, deep fake technology and measures of authenticity will be researched by the National Science Foundation (Briscoe 2021).

As technology continues to improve, deep fakes will become more advanced, likely becoming indistinguishable from real video. Their potential harm needs to be addressed by all levels of society, from governments they attempt to distort, to viewers they manipulate, and social media platforms they use to spread harmful misinformation.

TikTok’s ‘Beautiful’ Algorithm

Image Credit: Kristen Crawford

TikTok is an app that needs no introduction. Its name has been plastered on television screens due to the national security concerns raised by President Trump and its active use among teenagers in schools worldwide. The appeal of easy fame and virality has its audience addicted, with about 732 million monthly active users worldwide (DataReportal, 2021). Popular TikTok creators like Charli D’Amelio, Addison Rae, and Noah Beck have even gone on to become high-fashion brand ambassadors, star in films, and appear on magazine covers. The connection between most of these creators and their fame is unsettling: nearly all of them are Caucasian.

Facial Recognition Technology

Facial Recognition Technology (FRT) is a form of biometric technology that analyzes a person’s facial landmark map, such as the placement of their nose, eyes, and mouth, and compares it with other landmark maps in order to identify someone. The issue is that the technology misidentifies people, specifically people of color, because its databank consists mostly of Caucasian facial landmark maps. NIST (the National Institute of Standards and Technology) conducted a study in 2019 and discovered that FRT is 10–100 times more likely to misidentify Asian and Black individuals than Caucasian individuals.
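At its core, the matching step described above reduces to measuring distance between two landmark maps. The sketch below uses invented coordinates and an invented match threshold; real systems use far richer features than raw point positions:

```python
# Sketch of the FRT matching step: represent each face as a list of
# (x, y) landmark coordinates and compare maps by average point distance.
# Coordinates and the threshold are invented for illustration.

def landmark_distance(map_a, map_b):
    """Mean Euclidean distance between corresponding landmarks."""
    dists = [((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
             for (ax, ay), (bx, by) in zip(map_a, map_b)]
    return sum(dists) / len(dists)

def same_person(map_a, map_b, threshold=5.0):
    return landmark_distance(map_a, map_b) < threshold

enrolled = [(30, 40), (70, 40), (50, 60), (50, 80)]   # eyes, nose, mouth
probe    = [(31, 41), (69, 40), (50, 61), (51, 79)]   # slightly shifted
stranger = [(20, 45), (80, 45), (50, 70), (50, 95)]

print(same_person(enrolled, probe))     # True
print(same_person(enrolled, stranger))  # False
```

The bias problem enters through the data: if the enrolled maps and the thresholds are tuned mostly on Caucasian faces, the error rates for everyone else go up.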

“Middle-aged white men generally benefited from the highest accuracy rates.” -Drew Harwell, The Washington Post.

With this background knowledge in mind, I proposed a question that would later become an extensive research study with surprising results…

Does TikTok use facial recognition technology within its algorithm, and if so, how important is race when it comes to virality?

The Research

After a quick Google search, I came across an article by a group of Chinese scientists at the South China University of Technology. The article proposed a far more accurate technology that TikTok could implement into its system, pointing out the failures of TikTok’s current software and the successes of the one the researchers developed. Not only was facial recognition at play, but it was used to rank users on a scale of 1–5 based on their beauty.

Image Credit: South China University of Technology

The first page of the research proposal gives a summary of facial beauty prediction (FBP), which assesses a person’s attractiveness the same way facial recognition technology assesses a person’s landmark map for identification. FBP uses the fundamental infrastructure of FRT to rank a person’s beauty. I wanted to see what effect race had on virality, so for three months subjects of differing races (Asian, Caucasian, and Caucasian with Hispanic ethnicity) posted the same style and formatted videos with the same sound, at the same time, on the same day. The only difference between the videos was the subject in front of the camera.

The question shifted to the extent of a race’s virality. Who would be the most popular out of the three?

The Data

After three full months, the subjects’ analytics (their average number of comments, likes, and additional followers) were recorded, and headshots were taken to mimic the way FRT creates facial landmark maps. Following a facial dot map (fig. 2), the basic structures of their faces were depicted and displayed beside each other (figs. 3–4). Through this, the differing landmark maps were compared along with their profile data.

(fig. 2)
(fig. 3)
(fig. 4)

Subject B ranked highest in both likes and views. The startling discovery was not her newfound popularity, but the cause of it. Subject B was Hispanic, with fair skin and a small face. Subject C was Caucasian, with a bigger face and brow. Subject A was Asian, with a larger surface area around her cheeks and brow. Through this, I was able to discover that TikTok does not apply FRT and FBP simply based on a person’s genetic makeup, but on the composition and coloration of their face and features.

When comparing Subject B to the facial dot map of Charli D’Amelio, the similarities were striking (fig. 5).

(fig. 5)

This evidence anecdotally supports the limited diversity of popular creators, though it does not yet explain why it’s more common to find a “white passing” individual or a person with Caucasian features on your “for you” page.

If you need more evidence, don’t worry. The subjects used new accounts and didn’t interact with any of the videos shown on their For You Pages. When scrolling through their For You Pages for five minutes, each of the three watched 48 videos, of which an average of 41.66% featured white or “white passing” individuals.

The Conclusion

This research began with the question of whether or not TikTok used facial recognition technology and the extent to which race plays into virality. From the evidence found, I was able to answer this. TikTok, in its own way, uses facial recognition technology embedded in facial beauty prediction. Race seems to play an underlying role; however, it is not the ultimate deciding factor. Through my research I discovered that TikTok’s algorithm is sensitive and incredibly harsh to new users. Many claim your first five videos are the foundation of your career on TikTok, allowing the app to categorize your account based on the content you produce. If your account is not managed correctly, you will be deemed an unreliable source of traction for the app and you’ll experience what many call a “flop” (TechCrunch). This goes to show that the app relies on various factors when determining a user’s popularity, and race is definitely on the list.

This raises concerns for the future of AI and social media. Timelines, feeds, and pages already seem like algorithmic projections of unattainable beauty standards and ideals. The continued use and development of such technologies will continue to drive users down a path of insecurities and unconscious bias.

Artificial Intelligence & Protein Sequencing

Image Credit: DeepMind

Google-owned artificial intelligence firm DeepMind developed a system, AlphaFold, to solve the age-old ‘protein folding problem’: how a protein’s amino acid sequence dictates its 3D structure.

Proteins are made up of thousands of amino acids, and millions of small-scale interactions between molecules play into their 3D forms. Understanding the structure of just one protein requires years of work and highly specialized equipment. Thus, researchers have grappled with the complexity of protein folding for decades.

AlphaFold was trained on data from 170,000 known proteins, whose structures were deciphered the traditional way. Now, this technology has an average accuracy score of 92.4 out of 100 for predicting protein structure, and a score of 87 for more complex architectures.

Almost all diseases, such as Alzheimer’s disease, cancer, and COVID-19, involve protein structure, so AI opens the door for faster drug development and a better understanding of the biological processes underlying these health conditions.

Winner of the 2009 Nobel Prize in Chemistry, Venki Ramakrishnan, remarked, “This computational work represents a stunning advance on the protein-folding problem, a 50-year-old grand challenge in biology. It has occurred decades before many people in the field would have predicted. It will be exciting to see the many ways in which it will fundamentally change biological research.”

It will take some time to improve the algorithm’s predictive power and to bridge the gap between computer modeling and real-world pharmaceutical implementation, but AlphaFold will undoubtedly deepen our understanding of the role protein folding plays in a myriad of diseases.

Outside of medicine, AlphaFold could identify enzymes that break down plastic waste or capture carbon dioxide from the atmosphere, a useful tool in the long-standing battle against climate change.

Spotify Is Listening to You in More Ways Than You Think

Image Credit: Evan Greer

Streaming platform Spotify has amassed 155 million premium subscribers and 345 million monthly active users in over 170 countries and is expanding its services to 85 more countries in 2021. However, it has received harsh criticism for underpaying musicians and artists, continuing the illegal practice of payola for big labels, and concealing its payment structures. The Union of Musicians and Allied Workers (UMAW) spearheaded a Justice at Spotify campaign last October with demands such as increasing the average streaming royalty from $0.0038 USD to a penny per stream, and the group protested outside Spotify offices on March 15.

Most recently, the company filed an alarming patent to use artificial intelligence for emotional surveillance and manipulation, listening to users’ conversations, analyzing the sound of one’s voice, and curating targeted ads and music for one’s emotional state. The technology claims to identify “emotional state, gender, age, or accent” in forming its recommendations.

Fight for the Future describes what could soon be a terrifying reality: “Imagine telling a friend that you’re feeling depressed and having Spotify hear that and algorithmically recommend music that matches your mood to keep you depressed and listening to the music they want you to hear.”

Access Now, a nonprofit organization that defends and extends the digital rights of people around the world, sent a letter to Spotify dissecting how this new initiative would be extremely invasive and endanger users’ safety and privacy. It cites four major concerns: emotion manipulation, gender discrimination, privacy violations, and data security.

  • Emotion Manipulation: Monitoring emotional state and making decisions off of that creates a dangerous power dynamic between Spotify and its users and leaves the door open for emotion manipulation.
  • Gender Discrimination: Extrapolating gender through one’s voice and conversations will undoubtedly discriminate against non-binary and transgender individuals.
  • Privacy Violations: The artificial intelligence system would be “always on,” constantly monitoring what users are saying, analyzing their tone and language, and collecting sensitive information about users’ lives.
  • Data Security: Constantly collecting such personal and sensitive data about users’ personal lives would likely make Spotify a target of hackers, stalkers, and government authorities.

“There is absolutely no valid reason for Spotify to even attempt to discern how we’re feeling, how many people are in a room with us, our gender, age, or any other characteristic the patent claims to detect,” says Isedua Oribhabor, a U.S. Policy Analyst at Access Now. “The millions of people who use Spotify deserve respect and privacy, not covert manipulation and monitoring.”

Artificial intelligence is pervaded by racial and gender biases, and emotion recognition software like what Spotify is proposing is considered “a racist pseudoscience” at best. This technology poses a grave threat to independent artists and creators as well: what happens, Fight for the Future asks, “when music is promoted based on surveillance rather than artistry”?

Evan Greer, a transgender/genderqueer singer, songwriter, activist, and Deputy Director of Fight for the Future, released the song and music video “Surveillance Capitalism,” part of the larger album Spotify is Surveillance. The song aims to raise awareness about Spotify’s emotion surveillance and manipulation, garner support for Fight for the Future’s petition demanding Spotify abandon the patent and promise to never use this invasive technology on its users, and support the Union of Musicians and Allied Workers by donating 100% of the artist proceeds from the song.

Sign the petition.

Tell Spotify to drop its plan to spy on your conversations to target music and ads.

www.stopspotifysurveillance.org

Artificial Intelligence and the Future of Elections

GoodWorkLabs
GoodWorkLabs

The days of grassroots campaigning and political buttons are long gone. Candidates have found a new way of running, a new manager. Algorithms and Artificial Intelligence (AI) are quickly becoming the standard when it comes to the campaign trail. These predictive algorithms could be deciding the votes of millions using the information of potential voters.

Politicians use AI to manipulate voters through targeted ads. Slava Polonski, PhD, explains how, “Using big data and machine learning, voters received different messages based on predictions about their susceptibility to different arguments.” Instead of going door to door, using the same message for each person, politicians use AI to create specific knocks they know people will answer to. This all takes place from a website or email.

Voters tagged as conservative receive ads that reference family values and maintaining tradition. Voters deemed susceptible to conspiracy theories are shown ads designed to stoke fear, and all of these ads can come from the same candidate. Through specialized ads, a politician can make themselves one size fits all.

AI’s campaign capabilities don’t stop at ads. After Hillary Clinton’s defeat in 2016, the Washington Post revealed that her campaign was centered on a machine learning tool called ‘Ada.’ More specifically, “the algorithm was said to play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads” (Berkowitz). After Clinton’s loss, questions arose about the effectiveness of using AI in this new way. In 2020, both Biden and Trump limited AI primarily to advertising.

Advertising, in this context, means deploying bots and targeted swarms of misinformation to gain votes. Candidates are leading “armies of bots to swarm social media to hide dissent” (Berkowitz). A post-election analysis by The Atlantic found that around 20% of all tweets in the 2016 election were made by bots, as were over 30% of tweets surrounding the UK’s Brexit vote. Individual voters are susceptible to influence by social media accounts with no human being behind them. All over the globe, AI with an agenda can tip the scales of an election.

Large-scale political propaganda is now intertwined with social media campaigns, and it ultimately raises questions about our democracy. Users are manipulated into receiving different messages based on predictions about their susceptibility to different arguments for different politicians. “Every voter could receive a tailored message that emphasised a different side of the argument…The key was just finding the right emotional triggers for each person to drive them to action” (Polonski).

The use of AI in elections raises much larger questions about the stability of the political system we live in. Our democracy rests on the principle of free and fair elections, in which people can vote without intimidation or manipulation. AI undermines this principle by manipulating voters and even promoting “extremist narratives” (Polonski).

However, the use of AI can also enhance election campaigns in ethical ways. Polonski says, “we can program political bots to step in when people share articles that contain known misinformation [and] we can deploy micro-targeting campaigns that help to educate voters on a variety of political issues and enable them to make up their own minds”.

Social media already keeps citizens informed about elections, their representatives, and the issues around them. Used well, AI could strengthen that connection; as Polonski says, “…we can use AI to listen more carefully to what people have to say and make sure their voices are being clearly heard by their elected representatives”.

So while AI in elections raises many concerns about the future of campaigning and democracy, when employed in the right setting it has the potential to inform constituents rather than manipulate them.

Artificial Intelligence and the Future of Warfare

Military & Aerospace Electronics

Artificial Intelligence (AI) has made headlines for its use in everything from education to healthcare. Less well known, however, is its use by the American Department of Defense. Autonomous machines can complete many tasks without human supervision, from conducting missions to translating languages, making AI highly useful in warfare. But this new and innovative technology is not without its shortcomings.

AI can be particularly useful for gathering intelligence about adversaries because of the large data sets available for analysis. For example, Project Maven, a Pentagon AI project, uses algorithms to comb through footage taken by aerial vehicles to identify and target hostile activity. The system helps analysts make more accurate and timely decisions, sparing them the many hours they would otherwise spend combing through footage themselves.

In combat itself, AI has the capability to fundamentally shift the structure of warfare. As we move out of the Industrial Era of warfare, in which weaponry was the most consequential factor, information is emerging as the most vital aspect of combat operations.

However, the use of AI in warfare does have limitations. First, obtaining datasets that can be mined for intelligence can be a challenge, especially when organizations prefer to restrict access to their data. Even when developers do gain access, most software systems’ image-processing capabilities are not yet advanced enough to overcome flaws in photos, such as poor lighting or blur.

And although AI and machine learning software can make decisions more efficiently, they cannot adapt well to unfamiliar situations. Unlike humans, who can improvise solutions to new problems on the spot, AI works effectively only in the specific situations it was programmed to handle.

The debate over AI’s place in warfare is contentious. Supporters believe that autonomous systems can increase combat efficiency and limit harm to civilians. Critics argue that AI should never have the discretion to take a human life. Ethical frameworks have so far assigned culpability for any damage to the inventor of a lethal system. Many world powers have taken part in the NGO Campaign to Stop Killer Robots, and no fully autonomous weapons are currently used by the United States.

AI in warfare remains controversial, and looking at it from a holistic perspective makes its considerable dangers clear. Technology is our future, so it is important that we continue to prioritize safety and reliability.