AI and XYZ

Not only here at Stanford, but pretty much everywhere, we now hear various “AI and XYZ” talks, usually equating AI with ML. What I mean is that many people feel entitled to talk about AI and its effects on humans in the near future, but often without understanding the technologies or their limitations. So maybe some background: for demystifying AI, I find this video (although by DARPA, and not implying any endorsement for military use of AI) very useful. More on the AI hype and managing expectations in this article from the New York Times.

Amid all the hype, do we need to be afraid of AI? I don’t think so. So, what is it that AI can and can’t do? There are fairly clear boundaries between what ML can do well (recognizing patterns) and what it can’t do: recognizing and interpreting genuinely new situations that can’t be simulated well because they are unexpected. Note, for instance, that despite recent news headlines, machines can’t yet understand text, or react properly to atypical traffic situations in autonomous driving (as a recent accident shows), situations which wouldn’t pose any difficulty for humans.

In the last two days, I had the opportunity to attend two of the better, genuinely interesting “AI and XYZ” talks here at Stanford. Fittingly so: the term “AI” itself was coined by John McCarthy, who was a Stanford professor until his retirement in 2000.

AI and the libraries

Yesterday, in the Stanford Library AI Conversations series, Bryan Catanzaro talked about interesting applications of Deep Learning for libraries, where the limits are, etc.; see also this talk by Bryan: http://on-demand-gtc.gputechconf.com/gtcnew/on-demand-gtc.php?submit=&searchByKeyword=S7860

My take-home was: as we know, current ML can only learn patterns. It can do this extremely well, but it has disadvantages, e.g. serendipity is missing. In a nutshell, that means search based on current ML- and Deep-Learning-based AI would hardly give you surprising new results and insights, things you hadn’t thought about before: the things you actually experience when sitting in a seminar with people from a completely different discipline (like me listening to a seminar for librarians ;-)).
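To make that concrete, here is a minimal sketch of similarity-based retrieval (pure Python; the “embeddings” and document titles are invented toy values, not from any real system). By construction it can only return the items closest to the query, so you only ever get back what already resembles what you asked for; that is exactly where the serendipity goes missing.

```python
from math import sqrt

# Toy "document embeddings": invented 3-dimensional vectors, purely for illustration.
# A real system would use learned embeddings with hundreds of dimensions.
documents = {
    "intro to deep learning":        [0.9, 0.1, 0.0],
    "convolutional networks":        [0.8, 0.2, 0.1],
    "library cataloguing practices": [0.1, 0.9, 0.2],
    "medieval manuscripts":          [0.0, 0.8, 0.6],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, k=2):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(documents.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# A query "about deep learning" only ever surfaces the deep-learning-like documents;
# the librarian material never shows up, however inspiring it might have been.
print(search([0.85, 0.15, 0.05]))  # ['intro to deep learning', 'convolutional networks']
```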

These new experiences come from social interactions, from exposing yourself to experts in a domain different from your own… now, this is not what typical AI/ML applications do or can do at the moment… but could they?

As a thought experiment, let us imagine how you could train agents to be experts in different domains and then let them “exchange ideas“; is this the next step of AI to arrive at serendipitous results? Whoever would be interested in thinking this (so far only a spark of a crazy thought) forward and working with me on it, give me a shout! 🙂 Anyway, what I want to say in general is: one distinct distinguishing factor between human intelligence and AI is, not surprisingly, social interaction; it enables new ideas and re-thinking, getting out of your box, “reflection” and “inspiration”. This, again, is not what current AI can do: it can recognize what it has seen before, and by now it can do that better than humans in many domains.
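Just to make that spark of a thought slightly more concrete, here is a deliberately naive toy sketch (all agent names, “domains” and association lists are invented; there is no real learning involved): two “expert” agents, each holding its own domain associations, exchange their associations for concepts they both know, and thereby surface cross-domain pairings that neither agent would have produced alone.

```python
# A deliberately naive toy: each "expert agent" is just a dictionary of
# domain associations (invented for illustration; no real training here).
ml_expert = {
    "search":   ["ranking", "embeddings", "relevance feedback"],
    "patterns": ["classification", "clustering"],
}
librarian_expert = {
    "search":   ["serendipitous browsing", "shelf neighbourhoods", "subject headings"],
    "patterns": ["cataloguing rules"],
}

def exchange_ideas(agent_a, agent_b):
    """For every concept both agents know, pair up associations across the two domains."""
    shared_concepts = set(agent_a) & set(agent_b)
    cross_links = []
    for concept in shared_concepts:
        for idea_a in agent_a[concept]:
            for idea_b in agent_b[concept]:
                cross_links.append((concept, idea_a, idea_b))
    return cross_links

# Each pairing is a candidate "serendipitous" prompt neither agent holds on its own,
# e.g. ('search', 'embeddings', 'shelf neighbourhoods').
for link in exchange_ideas(ml_expert, librarian_expert):
    print(link)
```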

Global politics and the AI revolution
Coming from a completely different angle, Allan Dafoe’s talk today was about how politics and governance should react to the dominance of AI in different domains: will there be an AI arms race? Who will win it, etc.? See also this paper by Allan Dafoe for some background. There was quite a good, controversial discussion after the talk with discussant Jerry Kaplan from Stanford, who echoed many of the concerns (e.g. that AI is a potentially dangerous weapon in the hands of authoritarian regimes), but also emphasized, with a more positive spin, that, as with all prior technical revolutions, it is here to stay and we shouldn’t be afraid of it, and that the power of AI will balance out.

In the discussion, there were some questions on governance that reminded me of Sarah‘s work in IEEE’s P7000 WG: What are the right means to govern AI and avoid its abuse? Although P7000 is about addressing ethical concerns in [IT] system design, it probably applies well to many uses of ML/AI.

Apart from model processes like the one defined in P7000, in the discussion after the talk, some alternative governance options for AI were mentioned:

• govern AI like drugs: that is, assess and regularly monitor whether algorithms are “fit for use on humans”, as we do with drugs.
• transparency/auditability … which you could consider a prerequisite for the former.

As a side remark, we ourselves work on a framework for transparency/auditability in the context of compliance with privacy and consent regulations, in the H2020 project SPECIAL at our institute.

As another side remark, coming from quite different angles, both Allan Dafoe and Jerry Kaplan concluded that AGI (Artificial General Intelligence) is about as likely/close as aliens landing on Earth; contact me for details if you are interested in that discussion. 😉

BTW, if you want to know more, look at this syllabus of one of Allan Dafoe’s classes, where you will find some background reading materials. Some of the slides (not all; he actually presented more interesting results) were congruent with these slides.
