Human Rights and Technology Consultation

About the Warren Centre for Advanced Engineering


The Warren Centre brings industry, government, and academia together to create thought leadership in engineering, technology, and innovation. We constantly challenge economic, legal, environmental, social, and political paradigms to open possibilities for innovation and technology to build a better future. 

Human Rights and Technology Consultation



The Warren Centre advocates for the importance of science, technology and innovation. Our 30 years’ experience of leading the conversation through projects, promotion, and independent advice drives Australian entrepreneurship and economic growth. 


This submission is our response to the Human Rights and Technology Issues Paper distributed by the Australian Human Rights Commission. The specific consultation questions addressed are 5, 6 and 7, which relate to AI-informed decision-making.



Executive Summary


The Warren Centre observes that artificial intelligence and machine learning technologies have produced both benefits and harms as they inform decisions. We believe these technologies will deliver very strong public benefits globally in economic growth, productivity and, especially, health outcomes. It is important that machine learning and artificial intelligence continue to develop; however, attention must be paid to mitigating the negative effects of AI.


We offer the following recommendations:


  1. Additional research should be undertaken into the development of “ethical” frameworks for machine learning (ML) and artificial intelligence (AI). In some critical cases, the safety and efficacy of AI algorithms may need to be verified. 
  2. Efforts should be undertaken to open multi-disciplinary engagement and discourse on AI. This should include IT developers, engineers, lawyers, ethicists, government, industry, academia and the public.
  3. Greater public education and awareness are needed to understand the real benefits, risks and issues related to AI. Good public awareness informs better democracy and support for rational public policy.
  4. Ethical AI guidelines should be developed based on Australian values of privacy, fairness and transparency.
  5. Australia should establish and lead international dialogues on AI and robotics ethics.
  6. Enhanced STEM education and diversity and inclusivity efforts are needed to promote domestic talent development and to minimise the potential distorting effects of bias in machine learning and AI algorithms.


1. Current Situation


1.1 AI is advancing rapidly



In recent years, advances in sensors, digital commercial transactions, massive data accumulation, machine learning, artificial intelligence and sophisticated robotic actuation have produced remarkable innovations and commercialisation demonstrations. These technologies hold very strong potential for continued improvement in economic productivity and advancement in human health.

There is concern about employment displacement, and rapid technological advancement is likely to disrupt industries and jobs more quickly than some people can retrain and adapt. The interim disruption will create a period of social discontent as employment conditions adjust, and the productivity gains will create wealth that is not evenly distributed.

First movers are already creating new platforms with radically improved services and product offerings that disrupt established companies. Globalisation over the past three decades has already exposed areas of Australia’s economy where international competition effectively displaces local economic activity.


Although there is not perfect agreement in technical communities on the definition of artificial intelligence, we agree with the AHRC’s suggested definition of ‘narrow AI’. Applying that definition, there are many examples of machine learning and narrow artificial intelligence operating today in Australia. 


1.2 AI is imported to Australia


Some examples, such as Google Maps, have strong domestic Australian input into their development, but many of the AI systems currently operating in Australia were imported from the United States. Owing to the scale of their economies and populations, the US and China hold distinct advantages for scaling up AI systems. While Europe is also a large market with significant technological prowess and market capital, to date it is American and Chinese firms that have leapt forward with consumer-facing ‘killer apps’. Facebook, Amazon, Apple, Google and Uber are building significant platforms for economic activity; sometimes these platforms are monopolies or oligopolies. Both the ML/AI-based business methods and the ML/AI algorithms themselves are tested overseas and imported to Australia. In China, Tencent, Alibaba, Baidu and Didi Chuxing are building similar models to their US counterparts and rapidly achieving great scale of user adoption.

Problems observed overseas are therefore likely to appear in Australia.


2. Several problems already observed or likely to appear


Problem 2.1: Bias in underlying historical data


Machine learning and artificial intelligence rely on a base of historical data, and numerous examples have demonstrated that such empirical data contains bias. Banking and lending data, criminal justice data, and insurance data have all been shown to contain bias that harms the public, and bias inherent in historical data may not be easily detected. These problems are highlighted in the Issues Paper.
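
To illustrate the mechanism, the following minimal Python sketch trains a simple lending classifier on synthetic, deliberately biased historical approvals. All names and figures here (the “credit”, “postcode” and “group” variables, the 1.5 penalty) are illustrative assumptions for this example only, not drawn from any dataset cited above. The point is that the model reproduces the historical bias through a correlated proxy even though the protected attribute is never supplied to it.

```python
# Illustrative sketch: historical bias leaking into an ML lending model
# via a proxy variable, even when the protected attribute is excluded.
# All data is synthetic; the numbers are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (e.g., membership of a historically disadvantaged group).
group = rng.integers(0, 2, size=n)          # 0 = majority, 1 = minority

# Genuine creditworthiness is identical across groups.
credit = rng.normal(0, 1, size=n)

# Proxy feature (e.g., a postcode index) correlated with group membership.
postcode = group + rng.normal(0, 0.5, size=n)

# Historical approvals encode human bias: a penalty applied to the minority group.
logit = credit - 1.5 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on "neutral" features only -- the protected attribute is dropped.
X = np.column_stack([credit, postcode])
model = LogisticRegression().fit(X, approved)

# The model reconstructs the historical bias through the postcode proxy.
pred = model.predict_proba(X)[:, 1]
print("mean predicted approval, majority:", pred[group == 0].mean())
print("mean predicted approval, minority:", pred[group == 1].mean())
```

Run as written, the two printed approval rates diverge markedly despite identical creditworthiness across groups: a compact demonstration of why bias inherent in historical data may not be detected by inspecting a model’s inputs alone.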


Problem 2.2: Bias programmed into technology by biased humans


Section 4.2 of the Issues Paper describes sexism and LGBTI cyber abuse in online forums, and Section 7 describes the need to consider diversity and inclusivity when designing technology. Yet programming communities themselves show evidence of strong sexism and cultural bias.1 Programming in Silicon Valley can be so dominated by young white males that some commentators apply the term “brogramming”. Exclusion of women and severe cyber bullying within the industry itself highlight a dysfunctional inclusivity gap, and these problems are reported in the Australian tech industry as well. Stubborn resistance to reform within the industry was highlighted in 2017 when a Google employee circulated an internal memo criticising the company’s diversity efforts, titled “Google’s Ideological Echo Chamber”.


This inclusivity gap affects not just the tech industry workplace but also the industry’s ubiquitous products. How software is programmed may deeply affect the diverse range of human users who directly use the software, as well as those who are indirectly “handled” by it: everyone in modern society.


Problem 2.3: Economic factors affect governance


It is a principle in liberal economies that markets should not be regulated unless there is a clear defect or mischief. The scope of this consultation is human rights, not market operation, but it is worth reviewing some of the economic factors that interact with individual rights. Market failures exist in the rapid expansion of data-rich technology companies: data feeds machine learning, and machine learning informs artificial intelligence algorithms. These failures include asymmetrical information, anti-competitive behaviour, predatory pricing and unequal bargaining power. They interact with the human rights issues raised here and exacerbate problems related to bias and fairness. An AHRC consultation should not ignore factors that would ordinarily fall before the Australian Competition and Consumer Commission.


Governance problems in the current generation of tech companies and platforms are chronicled in The Upstarts, an exposé published in 2017 by Brad Stone.4 Two quotes from Stone’s book demonstrate the scale of the governance challenges:


“We are living in an era of robber barons. If you have enough money and can make the right phone call, you can disregard whatever rules are in place and then use that as a way of getting PR. And you can win.”



“Both were unleashing changes in communities’ behavior whose full impact on society they couldn’t possibly hope to understand. And each believed that the best tactic was simply to grow, harnessing the political influence of their user base to become too big to regulate.”


This year, Apple and Amazon each surpassed US$1 trillion in market capitalisation. Although these companies are comparatively mature, some rising technology entrants are attracting billions of dollars of venture capital to fuel the race “from zero to one”, i.e., to achieve a monopoly. The future financial rewards are massive, and the immediate investments are very substantial: multiple billions of dollars of cash. 


In some cases, the competition is winner-take-all, with monopoly as the prize. In such a hyperbolic, frenzied environment there may be little incentive to slow down and consider fairness or unintended harmful consequences. These are fiercely competitive economic races to establish and dominate new markets, and in such races technologists can feel pressure to cut corners, take risks and ignore near misses. Here the economic and governance factors interact with the human rights issues in this consultation.


Problem 2.4: Asymmetry in know-how, hardware and data


In the US bricks-and-mortar retail sector, an atmosphere of extreme competition has spawned a disturbing use of electronic surveillance techniques in shopping malls and street level stores. UPenn Professor Joseph Turow catalogues some of the strange scenarios in The Aisles Have Eyes: How Retailers Track Your Shopping, Strip Your Privacy, and Define Your Power.


Security camera technologies originally deployed to document and discourage shoplifting are being repurposed with facial recognition technology to monitor shoppers’ purchasing habits and to accumulate massive amounts of data on consumers. Combined with the ‘digital vapour trail’ of personal devices, retailers can employ sophisticated tracking techniques using Bluetooth, Wi-Fi, and GPS along with their own electronic payment systems to collect massive amounts of data. Information from these sources can be synthesised in ways that average people do not understand, and data collection is integrated across consumer-owned devices and store-owned fixed hardware. 


Privacy policies that are obscure to shoppers provide insufficient notice to people who do not understand how broadly and deeply their personal activity feeds data collection, machine learning and artificial intelligence.


Problem 2.5: Experimenting with people’s private data



One example of the mischief in the area of data acquisition, machine learning and AI is the 2014 case of OKCupid. 


The online dating app gathered information from users to match couples for dates. In a company blog post, OKCupid co-founder Christian Rudder proclaimed, “We experiment on human beings!”8 In some cases, OKCupid deliberately told badly matched pairs that they were highly compatible, to test whether the normal dating match algorithm actually worked. 


When challenged on whether the practice was ethical, the company offered only shallow answers. Academics in Europe have since published and analysed OKCupid user data obtained from the site. These incidents prompt questions about whether users should be afforded informed consent before their private data is used for experimental purposes.


Problem 2.6: Exploiting cognitive bias


Society must value human health and human wellbeing to sustain itself. However, in free societies, adults have the right to exercise free will, make their own decisions and pursue their own personal happiness.


Corporations advertise and market to consumers. Some industries, such as tobacco, alcohol and gambling, harm consumers, and adult choice in these product segments is confounded by addictive behaviours. There is an appearance of freedom and personal choice, but in some cases marketers exploit human nature, targeting product placement at cognitive biases and addictive weaknesses. 


The social effects have economic consequences. Financial gain moves from the consumer to the marketer, but harms and financial losses are borne by the unwitting consumer or socialised across the community through health insurance costs. In the cases of tobacco, alcohol and gambling, governments have stepped in to regulate or to apply offsetting taxes. In recent years, some jurisdictions have implemented regulations or taxes on sugar, citing recent research on the public health harms of addiction. 


Online platforms, sensors, data acquisition systems, machine learning and artificial intelligence systems have great power to understand users. Vulnerabilities could be exploited for commercial gain: intellectual, emotional, compulsive or addictive weaknesses could be targeted, with economic or human rights outcomes detrimental to vulnerable people. Emerging technologies such as virtual reality and augmented reality will have much greater power than printed media, radio, television or today’s internet, and technologies can be developed that “gamify” user activities and exploit gambling-style addictive behaviours.


Economic models of the past do not explain how artificial intelligence might be used to learn and exploit cognitive bias. The behavioural science in this area is relatively new.12 Consumers do not always act rationally in their own best interest. Purchasing behaviours might be exploited with clever “choice architecture”, and emerging technologies might amplify or exaggerate human weaknesses.


Political-economic descriptions may be stuck in an old-fashioned Adam Smith era, while Moore’s Law and the exponential growth of transistors and computing power drive the rate of technology deployment. Sometimes that deployment comes in bursts: a single smartphone generation integrated GPS location, Wi-Fi-enabled maps and cloud-based computation, enabling ride-share apps that substituted for taxis almost overnight. Economic disruption of an industry can occur very rapidly. 


This is the definition of the “killer app” that goes viral. A circumstance could be imagined in which a rapid-growth, AI-informed marketing decision maker is deployed against a vulnerable population of consumers. The (legal) opioid epidemic caught pharmaceutical regulators off guard with its speed. 


A digital epidemic that affects consumer economics, or that compromises the privacy, security, safety or fair treatment of users by exploiting cognitive bias, is an easily foreseeable future scenario. 


In some respects, this is already occurring, and the issues outlined above would combine. Rapid-growth companies could be “too big to regulate”, too fast to regulate or too wealthy to regulate (Problem 2.3). They might possess asymmetrical technical hardware, know-how and data relative to average people (Problem 2.4), and they might experiment on their users (Problem 2.5) and exploit addictive weaknesses or cognitive biases.

