Balancing personal and machine responsibility

Published in Think Gradient · 21 min read · Feb 8, 2019

Guest Author: Jon Machtynger

During Usain Bolt’s world-record 100m sprint at the 2009 World Athletics Championships in Berlin, he covered the distance between the 60th and 80th meter in just 1.61 seconds, about 44.7 km/h. I couldn’t beat him in a fair race, but I can confidently claim that with super-human abilities, I could travel at 100mph with very little effort. I’ll just need a little help from a car. Traveling at 100mph is a superhuman act. Just to really rub it in, I might even race him over 100 miles, averaging 100mph. Something so trivial was impossible 200 years ago, yet it is now readily accessible, and it is an excellent example of how humanity, partnering with a little bit of technology, delivers disproportionate outcomes.

If we ignore forays into autonomous vehicles, this is a true partnership with shared responsibility. Without the human driver, the car couldn’t travel anywhere. Without the car, I’d be constrained to the limits of human physiology, and my limits are clearly lower than Bolt’s. I’m responsible for learning how to drive, keeping the car in working order, ensuring sufficient fuel, driving on appropriate roads, and (other than in this example) keeping below the speed limit. The car (in effect, the manufacturer) is responsible for performing to specification, providing warnings of malfunctions, and using intelligent features such as ABS and power steering to make my driving experience safer and more comfortable. Together, we deliver superhuman capabilities. If I have an accident and the car was performing perfectly, then responsibility for any damage is probably mine alone. On the other hand, if there’s a design flaw, or a faulty part was not picked up during regular maintenance, then I could argue that I did everything I possibly could to shoulder my responsibility, so accountability might lie elsewhere. Before autonomous vehicles, this was how humans partnered with automotive technology. Our responsibility in that partnership is usually very clear, even if it’s not always honored. For example, many people run out of fuel despite an indicator notifying them to add more. Many people don’t maintain their cars, despite warning lights and beeps, and they still break down. Many people exceed well-known speed limits and get fined.

The key issue here is that we have a choice in how we engage with that partnership. I’ll keep returning to the issue of choice as it is core to our relationship with technology and other human beings.

Choosing is a privilege, and a bit of a pain, especially when the choices are mundane. The problem with automating decisions for people who can’t or won’t do the responsible thing is that we effectively take that choice away from them. This isn’t just a philosophical idea, but fundamental to a life lived freely, and it should form a core basis of our relationship with technology going forward. Would we, for example, remove alcohol from supermarkets because some people won’t drink responsibly? Would we remove knives from supermarkets because they can be used as weapons? The answer is probably no. Making flawed decisions is part of the human condition and a basis for societal evolution. Without flawed experiences, we wouldn’t stumble onto innovations. We wouldn’t highlight or reflect on things incompatible with our values, and we wouldn’t be in a position to change in ways that might benefit us all. In short, a laudable aim of minimizing physical and cognitive energy results in an unintended consequence of denying us the right to make our own mistakes. It’s subtle but significant. Machines have a track record of doing things efficiently, but let’s not confuse accuracy with social interpretations of good, right, or fair. They couldn’t be more different.

The field of artificial intelligence is around 70 years old. Predictions of both a utopian generalized AI in every house and doomsday scenarios of a technological singularity have so far proved unfounded. Automated systems have given way to autonomous processes, and far from making our lives easier, this has complicated things dramatically, mostly because the problems we’re encountering are not technical but human.

Supervised learning typically tries to minimize what is known as a cost or loss function. This is effectively the error or difference between a machine’s prediction and the correct value. The aim is to make that difference as small as practical so that the model generalizes, producing accurate predictions for novel inputs it hasn’t seen before.
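To make that concrete, here is a minimal, self-contained sketch (in Python, using invented toy data) of what "minimizing a loss function" looks like in practice: a tiny linear model whose parameters are nudged, step by step, to shrink the mean squared error between its predictions and the correct values. Nothing here reflects any real system; the point is simply that "learning" is the mechanical reduction of a numeric error.

```python
# A minimal, illustrative sketch of supervised learning as loss minimization.
# The data and model are made up purely to show the idea: the "loss" measures
# how far predictions are from the correct values, and training nudges the
# model's parameters to make that error smaller.

def predict(w, b, x):
    """A tiny linear model: prediction = w * x + b."""
    return w * x + b

def mse_loss(w, b, xs, ys):
    """Mean squared error between predictions and true values."""
    return sum((predict(w, b, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy training data: y is roughly 2x + 1, plus a little noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

w, b = 0.0, 0.0          # start with an uninformed model
learning_rate = 0.01

for step in range(2000):
    n = len(xs)
    # Gradients of the MSE with respect to w and b.
    grad_w = sum(2 * (predict(w, b, x) - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (predict(w, b, x) - y) for x, y in zip(xs, ys)) / n
    # Move the parameters a small step in the direction that reduces the loss.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, loss={mse_loss(w, b, xs, ys):.4f}")
```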

Let’s look at a simple fictional scenario. A hospital admissions system is used to determine the kind of care patients are offered. In an ideal world, we’d have an infinite resource in terms of physicians, medicine, hospital beds, time etc. The reality is that those resources are limited and getting scarcer over time. So, when a patient is brought in, what sort of treatment do they get? Should they even be treated? How do we decide? Before you start thinking that this is inhumane, remember that this is a standard decision-making process on the battlefield. Wars are relatively rare events, but when they do happen, physicians are asked to decide very quickly who to try to save, and who they believe will die anyway.

Back to our fictional hospital system. Patients turn up, and a system decides. Those who get to the next level see a physician and, hopefully, they get better. In this scenario, our first patient is an 80-year-old man with a long history of heart failure. He’s come in after experiencing a minor stroke. Our second patient is a 23-year-old woman with a collapsed lung. Who should the system choose to treat? Assuming that each of them would have successful treatment, we only have enough resources to help one. Before you pre-empt the system’s decision, let’s consider our cost function. If we maximize life expectancy, then the woman should probably go forward, and the man will die. She would probably live another 60–70 years, while he may not live another 10. On the other hand, what if our loss function minimized the long-term financial cost to the health system? The woman has a lifetime of upcoming costs, and her potential future 2.4 children will also place a substantial load on the system. The man has very little time left, and he won’t be having children. In this scenario, the man goes forward, and the woman dies. But what if the loss function related to social importance? What if the man were Nelson Mandela? What if the woman were heir to the throne or CEO of a billion-dollar company?
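To underline how much hangs on the choice of objective, here is a deliberately crude sketch of the scenario above. The patients, figures, and scoring functions are entirely hypothetical; the only point is that swapping the objective swaps the outcome.

```python
# A crude sketch of the fictional admissions scenario. The patients, numbers,
# and scoring functions are invented; the "winner" depends entirely on which
# objective the system is told to optimize.

patients = [
    {"name": "80-year-old man", "life_years_gained": 8, "lifetime_cost": 40_000},
    {"name": "23-year-old woman", "life_years_gained": 65, "lifetime_cost": 900_000},
]

def maximize_life_expectancy(p):
    return p["life_years_gained"]

def minimize_long_term_cost(p):
    # Higher score = cheaper for the system, so negate the cost.
    return -p["lifetime_cost"]

for objective in (maximize_life_expectancy, minimize_long_term_cost):
    chosen = max(patients, key=objective)
    print(f"{objective.__name__}: treat the {chosen['name']}")

# maximize_life_expectancy: treat the 23-year-old woman
# minimize_long_term_cost: treat the 80-year-old man
```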

When people are asked to make these decisions, more context can be considered. We can balance simultaneous loss functions, and even ignore rules if we choose. Either way, we get to choose, and we’re accountable for that decision.

Let’s consider a classic thought experiment in ethics — the trolley problem. A runaway trolley hurtles toward five people tied up on the tracks. You stand next to a control lever. If you pull the lever, the trolley redirects to a side track, saving the lives of those five people. However, there is a single person tied up on the side track. Do you allow the trolley to kill the five people on the main track, or pull the lever, diverting the trolley onto the side track where it will only kill one person? Which is the more ethical option?

Let’s go a step further and imagine that this dilemma is being faced by an autonomous car with the conundrum defined along different dimensions — age (old, young), gender (male, female), passengers or pedestrians, thin or fat, social status (high or low), humans or animals, lawful or criminals and so on. How should the car decide? A recent Nature publication titled The Moral Machine Experiment described a survey of millions of people in 10 languages, from 233 countries and territories. Globally, there was a general preference for sparing humans over animals, pedestrians over passengers, and the young over the old. On national and cultural levels, however, there were stark differences, which might appear almost inhumane to different groups. The real concern here is that this is not a hypothetical situation. Car manufacturers and policymakers are struggling with exactly these sorts of moral dilemmas, and society will vote with their wallet to implicitly support a technology provider’s algorithms and decisions. How would you feel as a passenger in an autonomous car in a foreign country if it didn’t subscribe to your personal safety preferences?

One supposed benefit of an automated system is that it focuses on the data ignoring subjective biases. However, the web is filled with examples of how machine learning can be intrinsically biased. AI has been accused of racism, homophobia, xenophobia, misogyny, and being an active oppressor of the underclass. There’s enough evidence to show that poor deployment of AI systems has indeed contributed to inequality. That, however, would be little different to damning the steel industry for being complicit in providing weapons of war.

Herein lies the crux of this article. AI is simply a tool. It’s a very sophisticated tool that provides capabilities that people, for the most part, simply aren’t cognitively effective at. Driven by venture-capitalist expectations, competitive pressures, and unreadiness at both individual and societal levels, we have grabbed the AI golden goose, demanding it lay eggs at an impossible rate. Society, organizations, and individuals have made AI accountable for their successes and failures rather than spend more time reflecting on why those things succeeded or failed. It would be no different from an amateur musician investing in a high-end guitar and being disappointed that he can’t play like Jimi Hendrix.

Proper AI deployment requires a blend of good mathematics, ethics, legislation, system and software architecture, social context, and patience. We can develop technical, legal, and social frameworks to help make this more accessible. We can blend ethics and systems thinking into how we define our success criteria. However, at some point, the ownership of the actual problem must come down to a human being. This trend of giving someone (or something) else ownership of, and responsibility for, much of our day-to-day living has progressively lowered the bar to the point where we’re tripping over it.

A valid critique of AI is that it is inherently biased. Unfairness creeps into the decision-making process and injustices occur. This is true in many instances. The data sets used to train these algorithms may have been skewed, too small, poorly chosen, or just wrong. Much of this unfairness was committed unknowingly. That doesn’t justify it, but it does explain it. So, what do we do? A year ago, I spoke with a well-meaning consultant who confidently told me that she could remove all the bias in data sets. Apparently, this was going to revolutionize the industry and make automation ‘fair’. My observation to her was that she wasn’t removing bias but adding her own counter-bias. Her data was just as biased, but in a way that more comfortably suited her agenda. She couldn’t accept that well-intentioned bias was still bias, because she was avoiding a specific injustice.

As a simple example, using historical career and salary data to help people understand what their next job might look like seems a good task for AI. Let’s dig into the data, though. Historically, there was more of a gender pay gap than today, which might lead to women using this new system being recommended lower-paying opportunities. Further, the distribution of leadership positions then and now is different. Of more importance, though, is that many jobs that exist today simply didn’t exist 10 years ago. How could such a system not be deeply flawed? With so many accessible open-source frameworks available, guides on how to ‘do AI’ everywhere, and so much data, how hard can it be to build your own system? Actually, it can be very hard indeed, which is one reason why good data scientists come at a premium. A data scientist would have asked many of the above questions very early on. Let’s also reflect on the fact that not every problem needs a data scientist. A new category of worker known as the Citizen Data Scientist has recently started to appear. With the right tools, they can extract fantastic insight from their data. But with less need for data scientists, how does good data hygiene become more commonplace?
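A toy illustration of why the salary example is so fragile: if the "model" is nothing more than an average over historical records, any gap baked into those records is faithfully reproduced as a recommendation. All figures below are invented.

```python
# A hypothetical illustration of the salary-recommendation problem above.
# The figures are fabricated. A system fitted to historical salaries simply
# reproduces whatever gap exists in that history when asked to "recommend"
# a salary for the next role.

historical_salaries = [
    # (gender, years_experience, salary) -- invented records containing a pay gap
    ("F", 5, 42_000), ("F", 10, 55_000), ("F", 15, 68_000),
    ("M", 5, 50_000), ("M", 10, 66_000), ("M", 15, 82_000),
]

def recommend_salary(gender, years_experience):
    """Recommend the average historical salary of 'similar' people."""
    similar = [s for g, y, s in historical_salaries
               if g == gender and abs(y - years_experience) <= 5]
    return sum(similar) / len(similar)

# Two equally experienced candidates get different recommendations,
# purely because the history they are compared against differs.
print(recommend_salary("F", 10))  # ~55,000
print(recommend_salary("M", 10))  # ~66,000
```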

Not all AI problems are equal and not all skills to solve them can be encapsulated in an easily digestible piece of open-source software or cloud-API. Consciously deciding to use this technology is all important. So, we come back to the theme of personal accountability. Who decides what data to use, what system to feed from and to, and what happens should anything go wrong? Who decides what framework is appropriate and how it’s maintained over its lifetime? Ultimately, accountability and personal responsibility need to be built into our system design in the same way as we might consider resilience, cost-effectiveness, scalability etc.

Who’s responsible though? Is it the designer of the system, the programmer, the marketing team who are the face of the capability’s value proposition? If open-source frameworks are being used, are historical contributors to those frameworks responsible? At some point, the user of the system must accept some responsibility. Our choice to use certain technologies must come with a conscious acknowledgment of our role in using them. Do you want fingerprint recognition built into your phone? OK, reflect on what rights you now lose if you’re asked to hand over your phone at a border. Do you want customized news in your feed? OK, then reflect on what data is being used about you to provide that personalized experience. Do you want a more secure world where AI is able to easily catch the bad guy and let the good guys live their lives? OK, then reflect on how laws will change, and how boundaries between privacy and security will shift. Want to be automatically notified when you’re near your favorite restaurants, shops, friends? OK, reflect on how that data might be used to gain insight into your movements. These aren’t simple questions, but they are common scenarios and they affect us all. They’re not remotely hypothetical. I’m saying that we should regularly ask these questions and embed that conscious reflection into our design process, and our understanding of social norms. This is not a technical discussion, but a human one.

Collective irresponsibility exists within social structures. Assuming you don’t need to worry about something because someone else is dealing with it leads to the situation where many are indeed worrying about it, but no-one is dealing with it. This slippery slope led to some well-publicized examples of industrial accidents and financial incompetence. Individuals followed corporate guidelines in good faith, but the guidelines were no longer fit for purpose. The guidelines hadn’t evolved to suit the facts on the ground. Punishment for corporate malfeasance has progressed from financial penalties to individual prosecution of executives and leaders who should have behaved responsibly. This alters the perception of limited or constrained liability often present in some companies and has led to distinct behavioral modifications in large organizations. When a CEO can be personally jailed for unethical practices within an organization, then things start to change. How soon before that awareness starts to trickle down to smaller organizations, local communities, and social structures?

Let’s come back to personal responsibility when used in conjunction with AI. If AI is a true partner in the process, providing the super-human capability to its users, then as with any partnership, boundaries should be customizable. If the partnership no longer works, it should be possible to terminate or modify it. The issue with many systems is that they are black boxes. They automate processes, but in fact, are being given autonomous roles. Automation for the purposes of speed and efficiency is one thing and typically runs to a well-defined set of rules and conditions. Autonomy, on the other hand, assumes novel situations. It isn’t necessarily focused on speed. It may also re-learn over time based on historical data and new contexts. This is an attractive proposition and a seductive slippery slope. Let’s not confuse the two.

In the 90s, I saw a comedian referring to people who couldn’t operate a video recorder as 12 O’clock flashers. Their VCR constantly blinked 12:00 reminding them to set the clock. They could press PLAY, FF etc. but if they couldn’t even work out how to set the time, then recording programs for a future time was beyond them. These days, the web is filled with guides on how to disable location tracking, configure your privacy settings, the software you shouldn’t download, URLs and sites you should be wary of, but how many people today are basically smart-phone versions of 12 O’clock flashers?

How does a lack of explainability affect human decisions?

There are many initiatives aimed at making AI internals more transparent so systems can rationalize their decisions. While an excellent objective, it would be idealistic to assume it should be available for everything. Think of a world expert in something. You could ask her to explain why she took a specific decision, and she might have an answer. She might also just say that it felt right because, in her experience, that’s been the case 9 times out of 10, and this situation seems similar. If transparency isn’t built into a system and it can’t explain how it arrived at its decision, should that decision be ignored? I would argue that the decision is not necessarily a bad one, but in specific instances, a human could either validate it through a more detailed look at the decision and the context, or the person might come up with the same gut feeling and make a call to follow it. Either way, the onus of responsibility falls back onto a human. It’s only through this sort of review, and by subtly changing our approval processes, that human oversight becomes more pervasive.

Roles and responsibilities

If AI is being embedded into systems that we use on a regular basis, then consider governments, society, corporates, and the individual. Who bears what responsibility for how AI is used and the resultant effects? There will be overlap because each of these parties cooperates with the others.

Large technical companies have come in for a lot of criticism over the last few years. In some instances, this was entirely deserved. Disingenuously luring people onto a platform in order to misuse their personal data for financial gain is both morally and criminally suspect. There have been some very public showcases of this, and the public voted with its feet by leaving those platforms by the millions. On the other hand, many large companies have been very transparent about how they look after the rights of their consumers and make every effort to ring-fence data to comply both with the letter of the law, and the spirit of the agreement. In other words, they genuinely want to do the right thing. Where this is breached, there should be proportionate prosecutions to help drive a culture of institutional responsibility.

However, it is fair to say that public opinion on what constitutes appropriate use of data has changed regularly over the last 5–10 years and differs across generations. What was once acceptable now isn’t. What was once too much too soon is now a fair price to pay for convenience. The problem is that what counts as acceptable changes too quickly for many companies to respond in a timely fashion. I have seen companies create scary capabilities. When the flaws are highlighted, they build more capabilities to mitigate the risks, then more and more and so on. These companies are on a treadmill of fixing AI systems when, in fact, a reasonable option might be to use no AI, or for that matter, no system at all.

While governments are in theory accountable to the people and can be removed from power, many large corporates, it seems, are accountable to almost no-one. Large corporates have a part to play in using AI responsibly, but they do not own the problem, as many seem to believe. It’s hubristic at best to think that large corporates as a whole can be trusted to operate purely with the best interests of society at heart.

I would say that the role of organizations is to establish an agreement with no hidden agendas and when it’s clear that the agreement is no longer appropriate, processes should be in place to help leave on good terms. This is harder than it sounds as there may be legal constraints binding the hands of either party.

Where do government and legislation fit here? In theory, the primary aim of government is to protect the rights and security of its people. This often takes the form of education, roads, the military, police, health, etc. AI is being embedded in all these areas. Many countries, for example, have published their AI strategies. Government also needs to consider the ethical and practical use of AI by individuals and companies. Legislation is, by design, a slow and often painful process. This helps reduce knee-jerk laws being passed without due consideration and consultation. The world is moving far faster than the legal and governing system can handle, so how does government effectively deal with AI? In the same way that the European General Data Protection Regulation (GDPR) forced a conscious embedding of privacy and the rights of the individual into corporate processes, semi-autonomous processes are no less impactful. A set of relatively unambiguous criteria should be developed that balances the protection of the individual, society, democracy, etc., while not overly stifling creativity. China’s recent controversial social-credit system uses gamification and AI to help influence behavior in line with state guidelines. Transport-related offenses (e.g. smoking in a non-smoking carriage), for example, could result in 180-day travel bans. Ignoring the rights or wrongs of such an approach, it is not without its flaws. For example, Dong Mingzhu, the CEO of a large organization, was allegedly caught jaywalking by face-recognition software and had her picture posted on a billboard in the hope of shaming her for her offense. Unfortunately, it was a large picture of Dong plastered on the side of a bus, as part of an advertisement, that had committed the ‘offense’.

A final danger of over-legislation is the risk that only approaches that were permissible when the legislation was passed can operate. Novel and innovative approaches would be illegal or constrained, and society would lose out.

Where does the individual fit in here? Frankly, the individual should be front and center. The individual should expect a say in the kinds of processes driven by their data. They should be empowered to choose non-AI processes. If many people refuse to use systems with embedded AI, then the government should honor this where feasible. If the price that the individual pays is slower service or more disjointed information, and they’re aware that this might happen, then this should be an option. It’s no different to any quality-of-service agreement. If I take a long-haul flight and decide that I will pay no more than £500 to fly to the US, then I can’t really be too disappointed if I don’t get to sit in first class. Individuals indicating which processes work well and which don’t also helps the government focus on processes that matter, and lets older, dysfunctional ones die off. Individuals also need to take personal responsibility for the services they use in the context of their private life. As I mentioned earlier, our choice to use certain technologies must come with a conscious acknowledgment of our role in, and the cost of, using them.

What about society in general? Where does it sit? Society is composed of clusters of individuals and is distinct from a government in the sense that, irrespective of which party or policy holds power, there is a pervading sense of communal identity. Consider a number of examples where social and government policy diverged in very public ways: the so-called Arab Spring, the current French Gilets Jaunes movement, and almost anything Brexit-related. The distinction here is important. While governments may come and go, and individuals have personal opinions, there is a higher-order point of view and a set of accepted norms. These must be considered if each group is expected to be held accountable for its actions. Society must also accept its role. How this is enforced is a tricky question, but to ignore it because the problem is hard would be foolish.

Finally, let’s highlight a number of trends that affect the pervasiveness and impact of AI. Some of these are technical, but many aren’t. Each of these could have pages of more detail dedicated to them, but it’s useful to reflect on them briefly:

· AI capability has become pervasive. It’s in our software, our cameras, our cars, our shopping platforms, televisions, phones, traffic systems, job applications, social networks, and computers. It is almost everywhere, and there are still plans to increase it and to make it more intelligent. In the same way that some applications are written first and have security added later, AI functionality is being delivered first, with the ethical, societal, architectural, and security details considered later. Adding hygiene to applications late in the day has been shown to be flawed time and again, but there seems to be little appetite for delivering better things more slowly.

· On-demand instantaneous gratification may have been a historical driving force for vendors. It has resulted in many simplified processes, a reduced price for some services, and a corresponding reduction in quality for many others. The perception of increased choice can be seductive. AI implicitly provides a more personalized and richer customer experience, but thinking fast often leads to poorer-quality decisions.

· Increased digitization of almost everything isn’t just about making the interaction of records, transactions, and processes more frictionless. It also has profound social consequences. A number of countries (e.g. Sweden) have aspirations to reduce cash transactions to as near to zero as possible. While this may reduce fraud and crime, it also impacts privacy, exposing some individuals to increased danger. With AI driven by data availability, digitizing and integrating such a core data source should ring alarm bells.

· Mass data collection is a natural consequence of increased digitization. While it improves aspects of system integration, reduces crime, and improves the quality of our lives, it has numerous downsides for our privacy and security, and its effects are experienced unequally by different parts of society. Initiatives such as the GDPR help provide a corporate framework for mitigating some of the implications for individuals. However, the horse has bolted, and we are only just working out where the stable door is.

· Democratisation of technology means that what historically required substantial corporate investment can now be done with relatively little know-how and finance by building on other platforms. Want to set up a personal broadcast channel? YouTube. Want to build a security system that incorporates face recognition, device tracking, and behavioral analytics? Use a public cloud on a pay-as-you-go basis, bring in some open-source IoT libraries, and embed a plethora of cloud services from small start-ups. Interested in manufacturing your own components? No problem. Invest in a 3D printer and download any number of open-source patterns. Want to sell your goods globally through trusted local distributors? Amazon. There is almost no combination of capabilities that isn’t readily available free or at an ever-reducing cost by building on an existing platform that already delivers at economies of scale. The major danger of democratized AI is that individuals are rarely as well informed, or as aware of their legal obligations, as large corporates are. Ignorance here may be our own worst enemy.

· Cloudification of basic resources and the Gig Economy. For many years, we’ve been paying for water and electricity on a utility pricing basis. Now we’re consuming much more in a similar manner. Beyond technical services accessible through APIs and mobile applications, it is now common to rent a bike for half an hour, or someone’s room for a couple of days. The barrier to providing a service to someone is low, and people are demanding to consume (and provide) services in this way. AI services fit nicely into a SaaS and PaaS model, and without proper accountability and responsibility, there is the potential for them to have a disproportionate impact on us all.

· Polarisation of opinions and groups. The rise of social media has led to more homogeneous interactions; in other words, echo chambers at scale. A recent study looked at political polarisation in America. Unpleasant as it might be, people have a cognitive predisposition to leaving their manners at home while interacting at a distance (i.e. online). The opportunities and benefits of platforms such as Twitter come with behaviors such as abuse, vitriol, and generally abhorrent actions by individuals who, in a face-to-face meeting, would never behave that way. The response from platforms such as Twitter, Google, YouTube, Instagram, and Facebook has been to use AI to censor posts and ultimately ban users. Much of this is done automatically, but there is also a human review process, which is itself biased. You might question why there is a problem with censoring hate speech. The issue is that much of this isn’t actually hate speech, but unpleasant political opinion. If reviewers consider it hateful, then AI systems will learn over time that this is what constitutes hateful text. My key issue with this scenario is that the same companies who make great efforts to highlight their focus on unbiased data are institutionalizing bias and prejudice into their public-facing systems.

A New York Times article describes processes that Facebook introduced to limit offensive speech on its platform. Guidelines were drawn up by young engineers and lawyers, summarised into yes/no rules, and then outsourced to call centers and low-skilled workers. Firstly, consider the implicit bias of two such narrow demographics. Secondly, and ironically, these low-skilled jobs are exactly the kinds of roles targeted for replacement by AI, because this sort of task is easily automated. At the time of that article (Dec 2018), Facebook claimed to have 15,000 people manually reviewing content. They also use plenty of AI for similar tasks related to offensive images and videos.
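A toy sketch of how such a labeling pipeline bakes reviewer judgment into a model. The posts and labels below are invented; the point is that a classifier trained on reviewers' yes/no decisions can only ever echo those decisions, including any slant in what was marked as "hateful".

```python
# A toy sketch of the labeling pipeline described above. The posts and the
# reviewer labels are invented. A model trained on those labels can only
# reproduce the reviewers' judgments, including any political slant in what
# they chose to flag as "hateful".

from collections import Counter

# Hypothetical reviewer-labeled training data: 1 = flagged as hateful.
labeled_posts = [
    ("ban all immigration now", 1),
    ("immigration policy needs reform", 1),   # an opinion the reviewer disliked
    ("open borders for everyone", 0),
    ("we should debate immigration calmly", 0),
]

def word_scores(posts):
    """Count how often each word appears in flagged vs unflagged posts."""
    flagged, unflagged = Counter(), Counter()
    for text, label in posts:
        (flagged if label else unflagged).update(text.split())
    return flagged, unflagged

def is_flagged(text, flagged, unflagged):
    """Naive classifier: flag a post if its words lean toward the flagged counts."""
    score = sum(flagged[w] - unflagged[w] for w in text.split())
    return score > 0

flagged, unflagged = word_scores(labeled_posts)
# A new, civil post about the same topic inherits the reviewers' judgment.
print(is_flagged("immigration needs reform", flagged, unflagged))  # True
```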

It has been said that one man’s freedom fighter is another man’s terrorist. In a similar vein, one man’s opinion is another man’s hate speech. The writer Evelyn Hall once described Voltaire’s attitude to a book that he was apparently unimpressed with: “I disapprove of what you say, but I will defend to the death your right to say it”. This describes a willingness to tolerate uncomfortable ideas without a need to support them in any way. AI is unlikely to consider the rights of others when evaluating its hate-speech loss function.

In summary

Over time we’ve become accustomed to increased automation, to the point where we are abdicating our need to choose. Choosing requires mental effort, and if a machine can get to know us better over time, then why wouldn’t an informed opinion be welcomed? The thing is that this has started to morph from helpful suggestions into default decisions made on our behalf. Converting as much as possible into an automated decision tree would invariably stifle innovation, introduce unfairness and inequity (at scale), and do so in a way not easily undone as it embeds itself into the very fabric of society.

AI is biased, and it will always be so. This is no different to the bias present in each of us as human beings. What we can do is reflect on the decisions we make and then wholly own those decisions. If society is to make the most of AI, then society must step up. It has to reflect on the idea that the opportunity presented by AI comes with an obligation to treat it as a complex system. It’s usually wise when dealing with people to assume that they are capable of being bright and articulate, and that they have their own unique experiences and various levels of complexity. How logical is it to assume that an artificially intelligent system is very human-like, but then treat it as though it were trivial to understand and master?

I’d like to return to the key theme of this article: responsibility. As individuals, we need to acknowledge our role as consumers, developers, legislators, and leaders in where AI is heading. Without accepting this, we are all sleepwalking towards a dystopian scenario where AI has a very different impact on our lives than expected.
