Deep Fakes: A Threat to Truth

Image Credit: Business Insider

In most situations, video footage is assumed to be fact: what a security tape shows is treated as indisputable evidence, and fiction is left to cartoons and special effects. Deep fakes, which use artificial intelligence algorithms to swap one person’s face for another’s, blur the line between fact and fiction. A once easily dismissible fake news headline can now be bolstered by video evidence. As the algorithms that create deep fakes become more sophisticated, many question the consequences of potentially slanderous deep fakes and the legislative approaches that might mitigate their harm.

In 2020, a State Farm commercial aired during ESPN’s The Last Dance. The commercial appeared to showcase a 1998 clip of an ESPN analyst making an accurate prediction about the year 2020. The clip was a deep fake, generated with the help of artificial intelligence software. Viewers likely realized that the clip was fake, but they might not have considered the ethical implications of the video and, by extension, of all deep fakes.

At the beginning of 2019, the number of deep fakes on the internet nearly doubled over the span of three months (Toews 2020). As artificial intelligence technology continues to improve, this growth will continue. While some deep fakes, such as the doctored clip of the analyst, are lighthearted, malicious deep fakes pose a serious threat to society. One example is deep fakes in politics. Deep fakes can be a powerful mechanism for destroying a public figure’s credibility by distorting their words, and for spreading false information to the people who view them. Because deep fakes can cause harm across so many spheres of society, they are a concern for everyone.

There are steps that tech firms, social media platforms, and the government are taking to alleviate this problem. Facebook teamed up with researchers to create deep fake detection software. The program, DeepFace, identifies human faces using a nine-layer neural network trained on over four million images, and it detects deep fakes with 97% accuracy.
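
To make the detection idea concrete, the sketch below shows a small binary “real vs. fake” face-image classifier in PyTorch. It is not Facebook’s DeepFace; the layer sizes, input resolution, and depth are illustrative assumptions, and real training data would have to be supplied separately.

```python
# Minimal sketch of a binary "real vs. fake" face-image classifier in PyTorch.
# This is NOT Facebook's DeepFace; the layer sizes and depth are illustrative
# only, and any training data would have to be supplied separately.
import torch
import torch.nn as nn

class FakeFaceDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # A small stack of convolutional blocks followed by a classifier head.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 1),  # single logit: > 0 suggests "fake"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FakeFaceDetector()
dummy_batch = torch.randn(4, 3, 128, 128)   # four 128x128 RGB face crops
logits = model(dummy_batch)
probabilities = torch.sigmoid(logits)        # probability each face is fake
print(probabilities.shape)                   # torch.Size([4, 1])
```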

The United States government has also been addressing deep fakes through legislation. The 2021 NDAA, which recently became law, requires the Department of Homeland Security to issue a report on deep fakes every year for the next five years, detailing how the technology can be used to cause harm. The Identifying Outputs of Generative Adversarial Networks Act, signed into law in December 2020, directs the National Science Foundation to research deep fake technology and measures of authenticity (Briscoe 2021).

As technology continues to improve, deep fakes will become more advanced, likely becoming indistinguishable from real video. Their potential harm needs to be addressed at every level of society, from the governments they attempt to distort, to the viewers they manipulate, to the social media platforms they use to spread harmful misinformation.

AI-Powered Detection of COVID-19

The COVID-19 pandemic is the defining public health crisis of the 21st century, and efforts to improve treatment, diagnostic testing, and prediction of clinical severity are paramount. Leading researchers across the globe are employing AI to automate parts of the COVID-19 response.

Image Credit: Healthcare Global

AI To Detect COVID-19 Through Cough Recordings

A team of researchers at MIT developed an algorithm that identifies the coughs of asymptomatic people with COVID-19 using patterns in four vocal biomarkers: vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation. The MIT Open Voice Model uses acoustics to pre-screen for COVID-19 from cough recordings before a viral test. The model was tested on cough recordings from over 5,000 individuals, and it accurately identified 98.5% of coughs from people with confirmed COVID-19 and 100% of coughs from asymptomatic people who tested positive for the virus.
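
The sketch below illustrates the general shape of such an acoustic pre-screening pipeline: summarize a cough recording with spectral features and score it with a binary classifier. It is not MIT’s Open Voice model; the file path, feature choice (MFCCs), and classifier are assumptions for illustration, and the training data here is synthetic.

```python
# Minimal sketch of an acoustic pre-screening pipeline: extract MFCC features
# from a cough recording and score them with a binary classifier. This is not
# MIT's Open Voice model; the path, features, and classifier are illustrative.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_features(wav_path: str) -> np.ndarray:
    """Summarize a cough recording as the mean of its MFCC coefficients."""
    audio, sample_rate = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=20)
    return mfcc.mean(axis=1)  # 20-dimensional feature vector

# Stand-in training data: real feature vectors and labels (1 = confirmed
# COVID-19) would come from a labeled corpus of cough recordings.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 20))
y_train = rng.integers(0, 2, size=200)

screener = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Screening a new recording (hypothetical path):
# features = extract_features("cough_sample.wav")
# risk = screener.predict_proba(features.reshape(1, -1))[0, 1]
# print(f"Estimated probability of a COVID-19-like cough: {risk:.2f}")
```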

The group is developing a smartphone app that would serve as a “free, non-invasive, real-time, any-time, instantly distributable, large-scale COVID-19 screening tool” and is awaiting FDA approval for its launch.

AI To Detect COVID-19 in Chest X-Rays & Predict Severity of Cases

Researchers from New York and China developed an AI-based tool to predict future clinical COVID-19 severity, allowing for early intervention. It could help physicians assess which patients with moderate COVID-19 symptoms can safely go home to recover and reduce already heavy burdens on hospital staff and resources.

Taking into account demographics, laboratory data, and radiological imaging, the study compared patients with mild initial symptoms, such as cough, fever, and upset stomach, who went on to develop severe complications like pneumonia and Acute Respiratory Distress Syndrome (ARDS), a fluid build-up in the lungs, with patients who had the same initial symptoms but did not.

The team found that changes in levels of the liver enzyme alanine aminotransferase (ALT), reported myalgia, and hemoglobin levels were the most accurate predictors of severe COVID-19, rather than markers considered hallmarks of the disease, such as patterns in lung images, fever, a strong immune response, age, or gender. Altogether, the model predicted the risk of developing ARDS after mild COVID-19 symptoms with 70–80% accuracy. The model is still in its early stages, having been trained on a small dataset of patients from two hospitals, but it could be vital for early intervention and the allocation of hospital beds as COVID-19 cases continue to rise.
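
As a rough illustration of this kind of tabular severity model, the sketch below trains a classifier on synthetic patient features named after the predictors the study highlights (ALT change, reported myalgia, hemoglobin). The dataset, outcome definition, and model choice are all stand-ins, not the paper’s method.

```python
# Illustrative severity-prediction sketch on the kind of tabular features the
# study highlights. The data is synthetic and the model is an assumption.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
patients = pd.DataFrame({
    "alt_change":  rng.normal(0, 1, n),    # change in alanine aminotransferase
    "myalgia":     rng.integers(0, 2, n),  # reported muscle pain (0/1)
    "hemoglobin":  rng.normal(13.5, 1.5, n),
    "age":         rng.integers(20, 90, n),
})
# Synthetic outcome: 1 = progressed to ARDS, 0 = recovered without it.
ards = (patients["alt_change"] + 0.5 * patients["myalgia"]
        - 0.2 * (patients["hemoglobin"] - 13.5)
        + rng.normal(0, 1, n)) > 0.5

X_train, X_test, y_train, y_test = train_test_split(
    patients, ards.astype(int), test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC on held-out patients: {roc_auc_score(y_test, probs):.2f}")
```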

Similarly, radiologists at the University of California at San Diego are using AI to augment lung imaging analysis to find signs of early pneumonia. The machine-learning algorithm overlays color-coded maps showing the probability of pneumonia over a patient’s x-ray. Chest x-rays are a cost-effective and quick diagnostic tool to predict the future severity of a patient’s COVID-19 case and the probability of developing pneumonia.

Image Credit: University of California
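
The visualization described above amounts to drawing a semi-transparent, color-coded probability map on top of a grayscale radiograph. The sketch below shows that overlay step with matplotlib using a synthetic “x-ray” and a synthetic probability map; it is not UC San Diego’s algorithm, whose probabilities would come from the trained model.

```python
# Minimal sketch of overlaying a color-coded probability map on a chest x-ray.
# The "x-ray" and the pneumonia-probability map are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
xray = rng.normal(0.5, 0.1, size=(256, 256))     # stand-in grayscale image

# Stand-in probability map: in practice this would come from the model.
yy, xx = np.mgrid[0:256, 0:256]
prob_map = np.exp(-(((xx - 90) ** 2 + (yy - 170) ** 2) / (2 * 40 ** 2)))

fig, ax = plt.subplots(figsize=(5, 5))
ax.imshow(xray, cmap="gray")
overlay = ax.imshow(prob_map, cmap="jet", alpha=0.4, vmin=0, vmax=1)
fig.colorbar(overlay, ax=ax, label="Estimated pneumonia probability")
ax.set_title("Probability map over a chest x-ray (synthetic)")
ax.axis("off")
plt.show()
```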

AI can provide quick, accurate, and non-invasive diagnostic testing for COVID-19, help healthcare professionals predict the future severity of cases, and determine which patients can go home safely when hospital resources run low.

Machine-Learning Model Can Inform Quarantine Measures to Reduce the Spread of COVID-19

Image Credit: Jonas França

A machine-learning model developed by researchers at MIT compares the number of COVID-19 infections in 70 countries across Europe, North America, South America, and Asia with how effectively each nation’s government maintained its quarantine measures. The tool is highly accessible and trained on publicly available COVID-19 data sets, so it could help policymakers design better quarantine measures across the globe.

The model is based on a traditional SIR model, an epidemiological model that predicts disease spread by tracking the number of people who are “susceptible,” “infectious,” or “recovered.” It was enhanced with a neural network and then trained on international COVID-19 data to identify patterns in infections and recovery. To determine “quarantine strength,” the algorithm calculates the number of infected individuals who are not transmitting COVID-19 to others because they are following the quarantine measures in place in their region. As new data is published, the model can show how a region’s quarantine strength evolves as safety regulations change.
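
To show the mechanics of the underlying SIR model, the sketch below simulates the classic susceptible/infectious/recovered dynamics with a simple hand-set quarantine term that removes a fraction of infectious people from circulation. The MIT model instead learns that effect from data with a neural network; the rates and schedule here are illustrative assumptions, not the team’s code.

```python
# Sketch of an SIR model with a simple quarantine term Q(t) that removes a
# fraction of infectious people from circulation. The MIT model learns this
# effect with a neural network; here Q(t) is a hand-set schedule.
import numpy as np
from scipy.integrate import odeint

def quarantine_strength(t):
    """Fraction of infectious people effectively isolated at time t (days)."""
    return 0.05 if t < 30 else 0.4   # stricter measures begin on day 30

def sir_with_quarantine(y, t, beta, gamma, population):
    S, I, R = y
    q = quarantine_strength(t)
    effective_I = (1 - q) * I                       # only non-isolated people transmit
    dS = -beta * S * effective_I / population
    dI = beta * S * effective_I / population - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

population = 1_000_000
y0 = [population - 100, 100, 0]      # initial susceptible / infectious / recovered
t = np.linspace(0, 180, 181)         # simulate 180 days
beta, gamma = 0.3, 0.1               # assumed transmission and recovery rates

S, I, R = odeint(sir_with_quarantine, y0, t, args=(beta, gamma, population)).T
print(f"Peak infections: {I.max():,.0f} on day {t[I.argmax()]:.0f}")
```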

The MIT team focused on the United States, using its model to calculate how effectively each state has enforced its safety measures and limited the spread of the disease. In spring and early summer, parts of the southern and central United States began reopening businesses and relaxing strict quarantine measures, which led to a sharp increase in COVID-19 cases in those regions. The model calculated that if these states had not reopened so early, or had reopened with strictly enforced safety measures like mask-wearing and social distancing, 40% of those COVID-19 infections could have been avoided. In Texas and Florida specifically, maintaining stricter quarantine and stay-at-home measures would have prevented as many as 100,000 infections.

The research paper’s lead author, Raj Dandekar, a graduate student in MIT’s Department of Civil and Environmental Engineering, emphasizes that “If you look at these numbers, simple actions on an individual level can lead to huge reductions in the number of infections and can massively influence the global statistics of this pandemic.” As the number of COVID-19 cases across the United States rises and cities like Los Angeles run out of available ICU beds, this machine-learning model could be vital in informing what level of quarantine measures to put in place. Co-author Christopher Rackauckas, an applied mathematics professor at MIT, says, “What I think we have learned quantitatively is, jumping around from hyper-quarantine to no quarantine and back to hyper-quarantine definitely doesn’t work. Instead, good consistent application of policy would have been a much more effective tool.”

This novel machine-learning model can help policymakers determine the best course of action for quarantine measures in different countries, illuminate patterns of COVID-19 spread across different demographics, such as socioeconomic level and race, and save millions of lives.

How AI Could Help Predict and Reverse the Effects of Climate Change

Image Credit: Foreign Affairs

From Beijing to Great Britain, companies are using innovative new technologies to reverse the effects of climate change.

Google has been reshaping its data centers by lowering their total energy consumption with machine learning (ML), the study of computer algorithms that improve automatically through experience. This will benefit both the climate and Google, since the company plans to open more of these data centers. DeepMind, Google’s London-based AI unit, is using information collected by sensors to reduce the energy the data centers use for cooling by up to 40 percent. The same technology is also being used to predict Google’s clean energy output, so the company can manage how much conventional energy it actually needs.
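
The general idea is to learn how sensor readings relate to energy efficiency and then compare candidate cooling settings before applying them. The sketch below shows that pattern on synthetic data: the sensor features, efficiency metric (PUE), and model are placeholders, not DeepMind’s system.

```python
# Illustrative sketch of cooling optimization: learn to predict energy
# efficiency (PUE) from sensor readings, then compare candidate settings.
# Features, data, and model are placeholders, not DeepMind's system.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 2000
sensors = np.column_stack([
    rng.uniform(15, 35, n),    # outside air temperature (deg C)
    rng.uniform(0.3, 0.9, n),  # server load (fraction of capacity)
    rng.uniform(10, 22, n),    # chilled-water setpoint (deg C)
])
# Synthetic PUE: hotter weather and lower setpoints cost more energy.
pue = 1.1 + 0.01 * sensors[:, 0] + 0.2 * sensors[:, 1] \
      - 0.005 * sensors[:, 2] + rng.normal(0, 0.02, n)

model = RandomForestRegressor(n_estimators=100).fit(sensors, pue)

# Compare two candidate setpoints under the same conditions (25 C, 70% load).
for setpoint in (12.0, 20.0):
    predicted = model.predict([[25.0, 0.7, setpoint]])[0]
    print(f"Setpoint {setpoint:.0f} C -> predicted PUE {predicted:.3f}")
```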

In Beijing, cognitive computing, which describes technology platforms built on AI and signal processing, is paired with the Internet of Things (IoT), a system of interrelated, internet-connected objects that collect and transfer data wirelessly, to predict pollution rates. The system uses ML to ingest data from sources such as meteorological satellites and traffic cameras, constantly learning and adjusting its predictive models. It can forecast pollution 72 hours in advance, with accuracy down to the nearest kilometre, detecting where the pollution is coming from and where it will likely go. Beijing’s government is using this methodology to reduce pollution levels ahead of the 2022 Winter Olympics. It can use these predictions to implement policies like temporarily restricting industrial activity or limiting traffic and construction in certain areas. Cognitive computing and ML also create models that let officials test the effectiveness of such interventions.
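
As a simplified illustration of forecasting pollution at a 72-hour horizon, the sketch below trains a regressor to predict a future reading from the previous 48 hours of a pollution series. The Beijing system fuses far richer sources (satellites, traffic cameras); the data and features here are synthetic assumptions.

```python
# Generic sketch of forecasting a pollution reading 72 hours ahead from lagged
# sensor data. The data and features are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
hours = 5000
# Synthetic hourly PM2.5-like series with a daily cycle plus noise.
t = np.arange(hours)
pm25 = 60 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 10, hours)

horizon, n_lags = 72, 48   # predict 72 h ahead from the previous 48 h
X, y = [], []
for i in range(n_lags, hours - horizon):
    X.append(pm25[i - n_lags:i])   # the last 48 hourly readings
    y.append(pm25[i + horizon])    # the reading 72 hours later
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
model = GradientBoostingRegressor().fit(X[:split], y[:split])
error = np.abs(model.predict(X[split:]) - y[split:]).mean()
print(f"Mean absolute error at a 72-hour horizon: {error:.1f} ug/m3")
```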

The Allen Coral Atlas, an initiative committed to studying the world’s evolving coral reefs, is using satellite images and AI image processing to detect changes and locate reefs threatened by global warming and ocean pollution.

In Singapore, a Digital Innovation Lab has been harnessing emerging technologies to ensure the continuity of tech-based climate change initiatives. The lab is building technology that can optimize public transport routes and decrease carbon emissions from vehicles. It has also developed technology to track the rise of sea levels and its impact on marine health. Another project tracks food provenance, checking nutritional quality and the chemical composition of food. Making these technologies accessible to partners across Asia lowers the barrier for new agencies to use them, which should boost the number of climate change solutions built with agile project management and design thinking.

The push to use machine learning builds on the work already done by climate informatics, a discipline created in 2011 that sits at the intersection of data science and climate science. Climate informatics uses data collected from sources like ice cores, climate downscaling (using large-scale models to predict weather on a hyper-local level), and the socio-economic impacts of weather and climate. AI can also unlock new insights from the massive amounts of complex climate simulations generated by the field of climate modeling. Dozens of models have since come into existence, all representing the atmosphere, oceans, land, and cryosphere (ice). But even with agreement on basic scientific assumptions, Claire Monteleoni, a computer science professor at the University of Colorado Boulder and a co-founder of climate informatics, points out that while the models generally agree in the short term, differences emerge in long-term forecasts. One project Monteleoni worked on uses machine learning algorithms to combine the predictions of the approximately 30 climate models used by the Intergovernmental Panel on Climate Change. Better forecasts can help officials make informed climate policy, enable governments to prepare for change, and potentially reveal where some impacts of climate change could be mitigated.
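
The sketch below illustrates the basic idea of learning how to combine many climate models’ outputs against historical observations, in the spirit of multi-model ensembles. The “climate models,” observations, and weighting method (plain linear regression) are all synthetic stand-ins, not Monteleoni’s algorithm.

```python
# Sketch of combining several climate models' predictions into one forecast by
# learning weights against historical observations. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)
years = 60
truth = 0.02 * np.arange(years) + rng.normal(0, 0.05, years)  # warming trend

# Each of 30 "climate models" tracks the truth with its own bias and noise.
n_models = 30
model_runs = np.column_stack([
    truth + rng.normal(0, 0.1) + rng.normal(0, 0.1, years)
    for _ in range(n_models)
])

# Learn weights on the first 40 years; evaluate on the remaining 20.
train = slice(0, 40)
test = slice(40, years)
ensemble = LinearRegression().fit(model_runs[train], truth[train])

weighted_error = np.abs(ensemble.predict(model_runs[test]) - truth[test]).mean()
simple_mean_error = np.abs(model_runs[test].mean(axis=1) - truth[test]).mean()
print(f"Simple multi-model mean error: {simple_mean_error:.3f} C")
print(f"Learned-weights ensemble error: {weighted_error:.3f} C")
```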

Some homeowners have already experienced the effects of a changing environment; for others, the threat can feel less tangible. To make it more concrete, researchers from the Montreal Institute for Learning Algorithms (MILA), Microsoft, and ConscientAI Labs used Generative Adversarial Networks (GANs), a type of AI that generates new data with the same statistics as its training set, to simulate what homes are anticipated to look like after being damaged by rising sea levels and intense storms. So far, MILA researchers have met with Montreal city officials and non-governmental organizations (NGOs) eager to use the tool. Future plans include releasing an app to show individuals what their neighborhoods and homes might look like under different climate change outcomes. The app will need more data, though, and will eventually let people upload photos of floods and forest fires to improve the algorithm.
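
For readers unfamiliar with GANs, the sketch below shows the core adversarial setup on toy 2-D data: a generator learns to produce samples and a discriminator learns to tell them apart from real ones. It is not the MILA image-to-image model; the network sizes, data, and training schedule are illustrative only.

```python
# Bare-bones GAN training loop on toy 2-D data, to show the adversarial
# generator/discriminator setup. Not the MILA model; sizes are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_samples(n):
    """Toy 'real' data: points on a noisy circle."""
    angles = torch.rand(n) * 2 * torch.pi
    return torch.stack([angles.cos(), angles.sin()], dim=1) + 0.05 * torch.randn(n, 2)

for step in range(2000):
    # Train the discriminator to separate real from generated points.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("Sample generated points:", generator(torch.randn(3, 8)).detach())
```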

Carbon Tracker is an independent financial think tank working toward the UN goal of preventing new coal plants from being built by 2020. By monitoring coal plant emissions with satellite imagery, Carbon Tracker can use the data it gathers to convince the finance industry that coal plants aren’t profitable. Google is expanding the nonprofit’s satellite imagery efforts to include emissions from gas-powered plants and to build a better understanding of where air pollution is coming from. While continuous monitoring systems near power plants can measure CO2 emissions more conveniently, they do not have global reach.

AI is a powerful tool in our arsenal if we hope to meet the UN’s 1.5-degree goal and go beyond it. It acts as a catalyst, speeding up the fight against climate change by providing accurate and precise information about the changing climatic factors around us. With AI on our side, we can defeat climate change in the long run.