Regulating AI technology has proved highly challenging for several reasons: the sheer speed of its development, differing viewpoints within the tech stakeholder ecosystem, and a marked non-uniformity in its deployment contexts, owing to the immense scope of its applicability. Further, the lag between the necessarily slow process of quantifying the effects of AI technology, on which regulatory principles might be based, and the pace of innovation can render regulatory measures obsolete.

How, then, may one proceed? Recent developments in the field of intelligent prosthetics suggest a few directions. Research into prosthetic devices has tended to focus on assistive applications for the differently abled, and the introduction of intelligent components into such devices has greatly widened their beneficial reach. Sensory-neural controls, for instance, have enabled closer-to-normal limb function in individuals who have suffered serious injuries. From the humble hand-operated wheelchair to text typing enabled by eye movements, the leaps in assistive technology have been impressive.

Brain-Machine Interfaces (BMIs), also called Brain-Computer Interfaces (BCIs), evolved in the short span of five years from allowing paralyzed individuals to browse the Internet with their minds in 2015, albeit while connected to a computer inside a lab, to doing so wirelessly from their homes in early 2020 [1]. The data rates required for wireless transmission from subjects to the AI support system are extremely high, equivalent to 48 high-definition videos being streamed simultaneously on a laptop, with a delay of less than 100 milliseconds [2]. Efforts are now underway to create low-power versions of such BCIs.
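For a rough sense of scale, that comparison can be turned into an approximate bandwidth figure. The per-stream bitrate used below is an illustrative assumption (HD video is commonly streamed at around 5 Mbit/s), not a figure taken from the cited work:

```python
# Back-of-envelope estimate of the BCI data rate described above.
# Assumption (not from the cited source): one HD stream ~ 5 Mbit/s.
HD_STREAM_MBPS = 5      # assumed bitrate of a single HD video stream
NUM_STREAMS = 48        # "48 high-definition videos" from the comparison
MAX_LATENCY_MS = 100    # "delay of less than 100 milliseconds"

total_mbps = HD_STREAM_MBPS * NUM_STREAMS
print(f"Approximate sustained throughput: {total_mbps} Mbit/s, "
      f"with under {MAX_LATENCY_MS} ms of delay")
```

Under these assumptions the wireless link must sustain on the order of a few hundred megabits per second, which helps explain why low-power versions remain an engineering challenge.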

At the bleeding edge of BCI technology is its application for augmentative purposes, moving beyond functional assistance. Humans now have the opportunity to expand their range of activities with the help of an additional robotic digit or limb. Surgeons, gamers, construction workers, football players, and guitarists could, in the future, augment their performance and enhance their efficiency with the help of such BCI devices. However, the blurring of lines between “robot-assisted”, “robotic” and “human” is likely to have significant health and safety ramifications for users and those in their vicinity.

What happens when a robotic finger malfunctions during a surgical procedure, or an intelligent arm attachment fails to respond in the middle of a fire rescue? Where would culpability lie: with the researchers, the manufacturers or the user? As intractable as the problem seems, the solution could lie in plain sight, within the regulatory pages of the department of motor vehicles.

Responsibility for a motor accident has always been placed squarely on the driver. Malfunctions, too, have traditionally been treated as cases of personal negligence in the upkeep of the vehicle. Might robotic appendages be treated similarly?

One reason why it has been possible to treat vehicle operators as the main cause of vehicular mishaps is the rigorous testing these machines are put through by their manufacturers, not to mention the unshakeable laws of thermodynamics and the principles of combustion engines on the basis of which these machines have been designed.

It appears, therefore, that ease of regulation of robotic limbs requires a) a high level of testing before release to the user and b) an even higher level of testing of the neural and physical impact of such devices on the user, in the short and long term. For instance, would a user accustomed to a powerful third arm come to subconsciously expect a similar level and type of physical activity from non-users, and thereby miscalibrate the latter's movements? Calibration errors could have very serious consequences in teamed efforts such as the co-piloting of an airplane, or in the reflexive responses expected while avoiding an object in motion.

Holding the augmented human responsible for misuse or malfunction would not only make users more discerning in their selection of the brands offering these devices but would also motivate manufacturers, eager to improve sales, to be more conscientious about ensuring the safety of their products.

Laws governing the use of augmentation-oriented BCIs which place the bulk of the responsibility on the user could conceivably end up inducing self-regulation along the profit-based supplier-manufacturer-developer chain. Should this expedient prove satisfactory, the regulation of other categories of AI technology could become more tractable.

The use of BCIs to offset functional losses arising from illness or injury is a promising line of research. However, it is not entirely clear where the benefit of augmentation lies when one considers a parallel line of development into fully automated systems: those designed to relieve humans of drudgery and danger, such as robotic industrial production and rescue missions, and those intended to surpass human speed and precision, such as self-driving cars, robotic surgery and AlphaGo (the Go-playing AI that defeated the top professional Go player Lee Sedol in 2016). The future, human or AI, remains an open book in a foreign language.

Views expressed by the author are personal and need not reflect or represent the views of the Centre for Public Policy Research.


Dr Monika Krishan's academic background includes a Master's in Electrical Engineering from the Indian Institute of Science, Bangalore, India and a Ph.D. in Cognitive Psychology from Rutgers University, New Jersey, USA. Her research interests include image processing, psychovisual perception of textures, perception of animacy, goal-based inference, perception of uncertainty and invariance detection in visual and non-visual domains. Her areas of study also include the impact of artificial intelligence devices on human cognition, from the developmental stages of the human brain through adulthood and the aging process, and the resulting impact on the socio-cognitive health of society. She has worked on several projects on the cognitive aspects of the use and misuse of technology in social and antisocial contexts at SERC, IISc, as well as on the development of interactive graphics for Magnetic Resonance Imaging systems at Siemens. She is a member of Ohio University's Consortium for the Advancement of Cognitive Science. She has offered services at economically challenged schools and hospitals for a number of years and continues to be an active community volunteer in the fields of education and mental health.

