Pushbuttons have been working their magic since the late 1800s. From doorbells, elevators, remote controls, flight dashboards and intercoms to Hollywood escape panels and trapdoors, this tiny human-machine interface has been an equal-opportunity enabler. And yet, when this packet of electrical ingenuity first appeared, it was met with widespread anxiety among everyday users about the dangers of coming into contact with an electrical device. Educators worried that effortless, opaque and therefore unquestioned use of devices such as the pushbutton would decrease initiative, lead to an atrophy of human skills and relieve users of the need to take responsibility for the inner workings of these devices.
Was the fallout of the pushbutton revolution as dire as predicted? Are we now a race of mindless button pushers? These questions will have to remain unanswered, for now, given the absence of systematic, longitudinal studies on pushbutton-induced effects on levels of individual accountability and resourcefulness.
What is interesting, however, is the solution that was proposed and adopted in response to these early tech concerns. In an attempt to allay the fears of both users and educators, each commercial pushbutton package was supplemented with a detailed explanation of its workings, with the goal of making the device completely transparent to users from all walks of life. School-going children and homemakers were particularly encouraged to understand the circuitry behind the pushbutton: the former were viewed as especially vulnerable to electric shocks, the latter were expected to be the primary users of appliances employing this interface.
AI technology currently appears to be facing a similar set of concerns relating to safety and transparency, although the perceived dangers are more social, mental and cognitive than physical. Racial and gender biases in automated hiring programs, ChatGPT-enabled plagiarism, social media addiction, and privacy and security threats are some of the issues that developers, inventors, policymakers and consumers are grappling with.
Might the expedient applied to the challenge of popularizing the electric pushbutton be applied, in some new avatar, to AI tech? Would a similarly detailed explanation of, say, the features of deep learning networks and the data used to train them build greater trust between tech companies and their consumers?
Some of the transparency difficulties arise from the inherent lack of explainability of conventional deep learning systems. Open sourcing could provide an intermediate solution, allowing informed users to improve upon the design of an AI system, along the lines of the Linux operating system. If this appears too tall an order to place before the consumer, recall the successful effort to educate users about the pushbutton circuit, cutting-edge tech for its time. And despite being open source, Linux has remained commercially viable. Tech companies, for pragmatic reasons, often lag behind the continuously evolving science that enables product development. Open sourcing could, additionally, bring consumers the benefits of that science in real time and arguably strengthen the notion of tech development as an enabler of human advancement.
Reference
1. Plotnick, R. (2012). At the Interface: The Case of the Electric Push Button, 1880–1923. Technology and Culture, 53(4), 815–845. http://www.jstor.org/stable/41682743
Views expressed by the author are personal and need not reflect or represent the views of the Centre for Public Policy Research.
Dr Monika Krishan's academic background includes a Master's in Electrical Engineering from the Indian Institute of Science, Bangalore, India, and a PhD in Cognitive Psychology from Rutgers University, New Jersey, USA. Her research interests include image processing, psychovisual perception of textures, perception of animacy, goal-based inference, perception of uncertainty, and invariance detection in visual and non-visual domains. Her areas of study also include the impact of artificial intelligence devices on human cognition, from the developmental stages of the human brain through adulthood and into old age, and the resulting impact on the socio-cognitive health of society. She has worked on several projects on the cognitive aspects of the use and misuse of technology in social and antisocial contexts at SERC, IISc, as well as on the development of interactive graphics for Magnetic Resonance Imaging systems at Siemens. She is a member of Ohio University's Consortium for the Advancement of Cognitive Science. She has offered her services at economically challenged schools and hospitals for a number of years and continues to be an active community volunteer in the fields of education and mental health.