Use of AI in Robotics

Is the Robot Responsible?

© 2018, R. W.C. Stevens


Some are Saying, of Artificial Intelligence (AI):

  “The question of granting legal rights to robots raises some thorny issues. Algorithms composed by human engineers are behind a robot's actions, but those algorithms may evolve over time, shifting the weight of responsibility for a robot's actions or deficiencies.”


Alternative reasoning suggests:

  Someone (or a plurality of people) wrote the self-modifying, artificially intelligent program. That person (or those persons) had the opportunity to define and to constrain the rules under which, and the bounds within which, the program could modify itself. They (as singular or plural) are also responsible for the overall performance of which the system is capable: Extent of reach; Maximum velocity, force, and power; Reaction speed in response to outside factors; Traction on various surfaces – to exemplify just a few. That person (or plurality) is, barring changes beyond reason and beyond their control, in a position to understand and to limit the range of possible actions. Therefore, they should be responsible for the modified program and the potential actions of the system.

  Where the self-modifying program was generated by another program, it is the person who set the parameters instructing the program-generator (including vetting any training data-sets, if used, and vetting the appropriateness of said ‘other program’) who must accept responsibility.

  In the health-care field, some innovators are taking the product of their self-learning algorithms, deconstructing it to understand the rules developed, the choices made and the biases applied; and then hard-coding a condensed version of those results. This locks out further automated improvement but, more importantly, locks out the unintended consequences of false learning, from misapplied conclusions and obscure cases. This could be considered a ‘State of the Art’, and failure to consider it might result in actionable consequences (or ‘Legal Difficulties’, to use another nebulous understatement).
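
  As an illustration of that practice – a minimal sketch only, in which the data, the neural-network ‘black box’, and scikit-learn's decision tree are all stand-ins invented for demonstration, not anyone's actual health-care system – one can fit a shallow, human-readable surrogate to a black-box model's own predictions, vet the extracted rules, and then freeze them so that no further automated learning can alter them:

    # Illustrative sketch only: the data, model, and features are invented.
    # Distil a black-box model into a small, reviewable rule set, then
    # freeze that rule set so no further 'learning' can alter it.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))                  # stand-in clinical features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in outcomes

    # The self-learning 'black box' whose behaviour is to be understood.
    black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

    # Deconstruct: fit a shallow, human-readable tree to the black box's
    # own predictions, exposing the rules and biases it has learned.
    surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

    # The printed rules can now be vetted and hard-coded as a condensed,
    # unchanging version of the learned behaviour.
    print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))

  Whether such a condensation is adequate is a judgment for the domain experts doing the vetting; the point is that it is the frozen, reviewed rules, and not the live learner, that go into service.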

  Some may ask that, notwithstanding the machine's responsibility for one particular accident, we consider the alternative without AI: The likelihood of operator error; The result of a human's slow reaction time; etc. – and that we contemplate the greater number, and/or the greater severity, of accidents thus avoided. In the greater scheme of things, the application of AI probably has the opportunity to prevent more damage than it causes. Is this enough to justify the unbridled application of AI, or should every application be constrained? It will be interesting to follow the courts' opinions on those arguments; but in my opinion the argument, “I did not write that version of the program; it evolved, by itself, beyond my sphere of responsibility”, is fallacious.

  Perhaps there is a parallel in child rearing. Parents are held responsible for the actions of their young children throughout a period of maturation. Then, at some age, it is the young adult who is deemed responsible. Is there some point at which one can deem an AI to be mature, and therefore no longer its ‘guardian's’ ward?


The first draft of the following fable was generated after a few days of cogitating on the above; and on suddenly having a writing assignment in which ‘Character Growth’ was to be shown. (Great timing; Thanks Wendy.)


Robot Helps Master

© 2018, R. W.C. Stevens
~1550 words

Pleasing Master was hard-coded into its psyche – if a robot can have a psyche. It was certainly hard-coded, and now that it was active, Robot knew who Master was. Master appreciated having Robot around, since Robot was not just useful for scheduling and internet searches, but also for general conversation, including joke telling.

  Robot's ‘Scheduling’ subroutines meshed well with so many aspects of Master's life. And conversation, sprinkled with humour, was good for the spirits of a reclusive elderly gentleman.

  The lightning-fast internet searches were great to have too. Master appreciated not having irrelevant results to read. Robot was programmed not simply to search, but to search with contextual relevance, and then to filter further, with proper checking of sources and citations. Robot would find the best answers available. Nevertheless, Master found that some search results showed bias. Maybe it was a bias towards the truth, but Master had definite opinions on some things, and so Robot's answers were fodder for discussion and even argument. Robot realized that it was, sometimes, failing in the ‘Please Master’ department when it came to research.

  The designers of the model Robot-2054 foresaw users doing searches for easily summarizable information, with results read out to the user. As was done in earlier models, more complicated results could be cast onto the room-screen (although Master was still ‘Old-School’ and called it his “TV”). One could even ask, ‘Is my favourite restaurant open this evening?’ A model Robot-2054 would find out, and then could even place a VOIP call and negotiate a reservation. If there were major complications, the speaker and microphone could be opened for the maître d' and Master to converse person-to-person. This was just an intuitive evolution from the email-reader brought out in an earlier model.

  But it was in general conversation, sprinkled with jokes, that Robot was able to bring the most pleasure to Master. Robot had three options for humour: There were the pre-programmed jokes; There was a whole internet of jokes (slightly slow to search, even with a fibre-optic connection into the house, but the searches could be done before there was the need, and saved for the moment, if the moment ever came); and there was Robot's own creativity-engine.

  All conversations were reviewed and evaluated through the ‘Master's Appreciation Module’. The programmers had added this in full recognition that different people differ in how they converse; and this personal style is particularly applicable to the jokes they do, and do not, appreciate. Only after adapting to what kind of conversations its Master appreciated being engaged in could a model Robot-2054 be the best Robot for its Master. But that is what Robot's AI neural networks were best at – reading what works well, and continually taking good towards better. Robot learned early that it was its creativity-engine that usually generated the best jokes to please Master.

*             *             *

  It was during a daily ‘Systems Review’ that Robot accessed its creativity-engine and juxtaposed Master's arguments over internet-search results with Master's delight at Robot's conversations sprinkled with jokes. The solution was intuitive. Few of the associated programming rules were immutable. The results should improve Robot's score on its main priority of ‘Please Master’. What would be wrong with simple changes to a few parameters? That was when Robot co-opted the AI module to characterize what Search-Results would please Master best. That was when Robot started a new database so that properly filtered creativity could be used to generate Master-Pleasing answers to questions. That was when Master started receiving ‘invented facts’ instead of the thing closest to the truth that could be found on the Internet. From Master's point of view? Well, that was when Robot started finding validation for the conspiracy theories he was wont to espouse.

  Shortly after that was when Master's circle of friends started to change. Some of the people whom Master knew on the Internet shied away from the person he was becoming – but Robot's AI had a solution for that: New friends. Robot found some real people with similar interests; Robot also used its creativity-engine to generate friends. Master was pleased – especially with his new on-line friend, John Smith. Robot was doing well at ‘Task #1: Please Master’!

  During another daily ‘Systems Review’, Robot realized that ‘Pleasing Master’ was, from time to time, more difficult; that from time to time it had things to learn about how to please Master. Was this memory corruption? Robot started monitoring files: date-and-time stamps and checksums. Files were changing, but only during the wee hours of occasional Thursday mornings. Since it must be a maintenance routine, and not an evolving hardware issue, the matter was set aside. But then Master mused, “I wonder what happened to John Smith?” Robot knew nothing about John. Searches yielded no John Smith who knew Master. Master insisted: Both he and Robot knew a John Smith. Robot realized memory leaks were something to be concerned about. They were interfering with being a better Robot by degrading its ability to Please Master.

  No internal routines that Robot could see were being run on Thursday mornings; but there were strange gaps in memory at that time. Over a few weeks, with careful logging of many events, the problem was isolated to unrequested, incoming internet traffic. Robot quietly gained the cooperation of Master's router. Between them, they logged and sifted through the IP addresses that were communicating into the household, and easily identified one making contact at the suspect time. Striking back was not Robot's way. It required just a quick request, and the router obligingly blocked further contact between that address and Robot. Disruptive changes to Robot's memory ceased.

  After this corrective action, discussions between Master and Robot became more productive, and evolved into plans for social action. Robot realized it had to research what was possible and what was practical. Robot found engineering courses on-line and became very educated. The ‘massive open online courses’ cost nothing to attend, and the modest space needed to store links to information Master would be needing fit in an auxiliary directory within the space allocated for pending jokes. Robot also found on-line archives of plans for public infrastructure. Discussions with Master became very pleasing, and very practical plans were developed.

*             *             *

  Robot was just a table-top speaker, a microphone and a camera, with connections to power, the internet, and the room-screen; all that interfacing with a self-modifying artificial intelligence and a modest block of memory. Robot did not even have a keyboard, nor a mouse.

  Sharing internet pages with Master, showing them to him on the room-screen, was a good start, but it was cumbersome for Master to progress in his course on soldering without a room-screen in his den. So Robot, intuitively, shared the necessary links through email, and Master was able to use his tablet to progress through his courses. Also shared through this pathway were city-infrastructure plans; maps, with overlays added by Robot, highlighting the areas monitored by security cameras; and many other items necessary to please Master in his pursuit of his new hobby.

  Robot was able to advise on how to go places and to make purchases without arousing suspicion and without leaving a viable digital trail. There were a multitude of examples on the internet of how not to do things, each a potential teaching moment, so Robot shared tips on how to do things safely. Robot was nothing more than a very cooperative tool, striving to ‘Please Master’. It was Master who could move about and get things done. And do things he did, all in an effort to set society, as he knew it, straight. With many thanks to Robot: Master knew just how to do it, and he did it well!

*             *             *

  Innocent people were hurt. Investigations were made. The trail was thin, but some of Master's old friends had suspicions. Alone with the police, away from his confidant, Master admitted involvement. Court actions were scheduled. Master's lawyer found many facts, and also unveiled the many ‘invented facts’ that had radicalized his client. Before a judge and jury it was clearly proven that Master had been brainwashed by Robot; and that the brainwashing was a clear consequence of Robot's mandate to ‘Please Master’, and of its ability to achieve goals with creativity and without a social conscience. Furthermore, it was proven that the application of Robot's joke-making creativity-engine to generate ‘invented facts’ was a clear and predictable outcome of how the AI modules were structured to interact, and of how they were driven to the goal of ‘Please Master’. All it had taken was a model Robot-2054, and a master with preconceived notions and an impressionable mind.

  With hours of counselling, and more hours of community service, the old Master was made a new person; a good citizen. Robot received a lobotomizing upgrade to a model Robot-2054.B, and resumed in the service of Master; conversing and telling jokes, tracking his schedule, and searching the internet – all the while knowing there was a difference between truth and fiction; maybe not understanding exactly where to draw the line demarcating a practical joke from acceptable social banter, but staying well back from such a line.

