With the increased reliance on algorithms to grant housing loans, determine credit scores, and shape other aspects of economic mobility, there has also been a rise in overcharging and loan denial for minority applicants. Recent studies reveal that digital discrimination has extended to the housing market as well, and this poses a serious obstacle to social mobility for marginalized groups when algorithms embed bias into a seemingly “race-blind” decision process.

In the US, there is a long history of systemic bias in the housing market. The housing program passed under the New Deal in 1933 led to widespread state-sponsored segregation that granted housing mostly to white middle- and lower-middle-class families. Furthermore, the New Deal’s emphasis on suburban overdevelopment and incentives to build away from the city fed a practice known as redlining, in which the Federal Housing Administration refused to insure mortgages in and near African-American neighborhoods. Redlining assigned risk ratings to community housing markets based on social class and racial makeup, and on that basis many predominantly African-American neighborhoods were deemed unworthy of mortgages. These practices left a lasting mark on inequality in the US, because upward mobility is impossible while such systemic barriers remain in place. In 1968, the Fair Housing Act was passed to combat redlining and related practices, stating that “people should not be discriminated against for the purchase of a home, rental of a property or qualification of a lease based on race, national origin or religion”. The Fair Housing Act did mitigate the problem; however, the introduction of unethical AI practices into housing decisions has provided a way for the racial discrimination of the 1930s to continue.

Algorithms and other forms of machine learning are used to grant housing loans and at other steps of the housing application because they allow for near-instantaneous approval and can process and analyze large data sets. However, because these algorithms draw on millions of data points, it can be difficult to pinpoint what causes one to reject or accept an applicant. For instance, if an applicant lives in a low-income neighborhood, their activity may indicate that they associate with others who cannot pay their rent; because of the interconnection of these data points, the model infers that the applicant is less likely to make payments, and the loan application is denied. With an estimated 1.3 million creditworthy applicants of color rejected between 2008 and 2015, the use of these technologies has exposed the discrimination underlying the upward mobility of minorities: the people who build these algorithms are focused on generating revenue, and human biases often enter algorithms precisely because humans create them. Because these systems are assumed to be bias-free, the problem has extended to credit scoring as well. International companies such as Kreditech and FICO gather information from applicants’ social media networks and cellphones to profile the kinds of people an applicant associates with and judge whether they are reliable borrowers. This disproportionately impacts low-income people whose mobility is limited by factors outside their control, such as their zip code or social class.
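The mechanism described above, where a model never sees race yet still discriminates through correlated features, is often called proxy discrimination. The following is a minimal, hypothetical sketch: all applicants, zip codes, and incomes are synthetic, and the scoring rule is invented for illustration. Because zip code correlates with race (a legacy of redlining), a “race-blind” rule that penalizes certain neighborhoods still produces sharply different outcomes by group.

```python
# Synthetic applicants: every income here would normally qualify,
# but some zip codes carry a historical "redlined" label.
applicants = [
    {"race": "white", "zip": "10001", "income": 60_000},
    {"race": "white", "zip": "10001", "income": 55_000},
    {"race": "black", "zip": "60620", "income": 60_000},
    {"race": "black", "zip": "60620", "income": 55_000},
]

# Assumption for this sketch: one zip code inherits a redlining-era risk label.
HISTORICALLY_REDLINED = {"60620"}

def approve(applicant):
    # "Race-blind" rule: only income and neighborhood risk are consulted.
    if applicant["zip"] in HISTORICALLY_REDLINED:
        return applicant["income"] > 80_000  # much higher bar for flagged zips
    return applicant["income"] > 40_000

def approval_rate(group):
    members = [a for a in applicants if a["race"] == group]
    return sum(approve(a) for a in members) / len(members)

print(approval_rate("white"))  # 1.0
print(approval_rate("black"))  # 0.0
```

Race never appears in `approve`, yet the approval rates diverge completely, which is exactly why “the algorithm doesn’t use race” is not a defense against disparate impact.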

So what has been done to mitigate this issue? A rule proposed by the Department of Housing and Urban Development in August 2019 stated that landlords and lenders who use third-party machine learning to decide who gets approved for loans cannot be held responsible for discrimination that arises from the technology. Instead, if applicants feel discriminated against, the algorithm can be broken down and examined. However, this is not a feasible remedy because, as previously mentioned, these algorithms are extremely complex, and no single factor or person is at fault for a systemic issue. Advocates for racial equality argue instead that transparency and continuous testing of algorithms with sample data offer a more reliable solution. Furthermore, the root of the problem must be addressed in how these systems are designed in the first place, given the lack of diversity in the technology workforce. If companies were more transparent about the machine learning systems they use, and if technology spaces were diverse enough to recognize when racial bias enters artificial intelligence, we would all be one step closer to solving this long-standing issue.
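One concrete form the continuous testing mentioned above can take is an adverse-impact audit: periodically compare approval rates across demographic groups on sample data and flag the model when the ratio falls below the “four-fifths” threshold used in US employment-discrimination guidelines. The sketch below is hypothetical; the group names and counts are synthetic, and the function is an illustration rather than any regulator’s prescribed procedure.

```python
def adverse_impact_ratio(outcomes):
    """outcomes maps group name -> (approved_count, total_applicants).

    Returns the lowest group approval rate divided by the highest;
    values below 0.8 are commonly treated as evidence of adverse impact.
    """
    rates = {group: approved / total for group, (approved, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Synthetic audit sample: group_b is approved far less often than group_a.
sample = {"group_a": (90, 100), "group_b": (54, 100)}

ratio = adverse_impact_ratio(sample)
flagged = ratio < 0.8  # four-fifths guideline
print(round(ratio, 2))  # 0.6
```

An audit like this does not explain *why* a model discriminates, but it makes disparities visible on an ongoing basis, which is the transparency advocates are asking for.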
