Archive for October, 2013



Noam Chomsky on where Artificial Intelligence went wrong

Yarden Katz is a graduate student in the Department of Brain and Cognitive Sciences at MIT. He shares his extended conversation with Noam Chomsky about Chomsky’s critique of artificial intelligence and why it may be headed in the wrong direction, along with selected video clips and photographs from the interview.

Last year, at an MIT symposium on “Brains, Minds and Machines”, Noam Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in a more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick out regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition. Read Noam Chomsky’s views on AI here.
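The kind of statistical technique at issue can be illustrated with a minimal sketch (not from the interview): a bigram model that counts word-pair frequencies in a corpus and predicts the next word. It picks out regularities from data alone, with no explanatory theory of language behind it — which is exactly the style of work Chomsky questions and Norvig defends.

```python
from collections import Counter, defaultdict

# A toy corpus; any body of text would do.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigram frequencies: regularities mined purely from data.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word` in the corpus."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" more often than "mat" or "rat"
```

The model predicts well within its corpus yet says nothing about *why* language has the structure it does — a one-line caricature of the debate above.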

This critique sparked an elaborate reply to Chomsky from Google’s director of research and noted AI researcher, Peter Norvig, who defended the use of statistical models and argued that AI’s new methods and definitions of progress are not far off from what happens in the other sciences.

AI is technology, not science. That’s correct, but one can lead to the other.

read more



Nassim Taleb and Daniel Kahneman discuss Antifragility

“Some can be more intelligent than others in a structured environment—in fact school has a selection bias as it favors those quicker in such an environment, and like anything competitive, at the expense of performance outside it. Although I was not yet familiar with gyms, my idea of knowledge was as follows. People who build their strength using these modern expensive gym machines can lift extremely large weights, show great numbers and develop impressive-looking muscles, but fail to lift a stone; they get completely hammered in a street fight by someone trained in more disorderly settings. Their strength is extremely domain-specific and their domain doesn’t exist outside of ludic—extremely organized—constructs. In fact their strength, as with over-specialized athletes, is the result of a deformity. I thought it was the same with people who were selected for trying to get high grades in a small number of subjects rather than follow their curiosity: try taking them slightly away from what they studied and watch their decomposition, loss of confidence, and denial. (Just like corporate executives are selected for their ability to put up with the boredom of meetings, many of these people were selected for their ability to concentrate on boring material.) I’ve debated many economists who claim to specialize in risk and probability: when one takes them slightly outside their narrow focus, but within the discipline of probability, they fall apart, with the disconsolate face of a gym rat in front of a gangster hit man.”
Nassim Taleb. read more



Principles of the man-machine framework

According to Dan Woods, systems (transactional or analytical) will fail if not designed with a view of people and machines working in harmony. To achieve a practical balance, a man-machine framework must be adopted, in which man is in charge and the algorithm is an extension. In his conversation with Arnab Gupta, CEO of Opera Solutions, Woods explores principles of application design using machine learning. Continue reading this interesting piece to learn more about man-machine harmony here.

The principles behind the man-machine framework are as follows:

  • The machine is a prosthetic of the human mind.
  • The computer interface supports the human thought process, not the other way around.
  • The man-machine interface’s purpose is to facilitate frontline productivity for humans in business.
  • The best processes separate tasks into those appropriate for machines and those appropriate for humans.
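The last principle — separating tasks into those appropriate for machines and those appropriate for humans — can be sketched as a hypothetical triage loop: the machine decides routine cases on its own confidence score and routes only the ambiguous middle band to a person. The thresholds and names below are illustrative assumptions, not anything described in the article.

```python
# Illustrative thresholds (assumed, not from the article): the machine
# handles clear-cut cases; a human decides the ambiguous middle band.
AUTO_APPROVE = 0.9
AUTO_REJECT = 0.2

def triage(score, ask_human):
    """Route one case by the machine's confidence score in [0, 1]."""
    if score >= AUTO_APPROVE:
        return "approved"        # machine-appropriate: routine, high confidence
    if score <= AUTO_REJECT:
        return "rejected"        # machine-appropriate: routine, high confidence
    return ask_human(score)      # human-appropriate: a judgment call

# Usage: the "human" is simulated here by a callback.
decisions = [triage(s, lambda s: "needs review") for s in (0.95, 0.5, 0.1)]
print(decisions)  # ['approved', 'needs review', 'rejected']
```

The design choice mirrors the framework: the algorithm extends the person rather than replacing them, and the interface only surfaces the cases where human thought adds value.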
In the comments section below, suggest principles you think should be included in the man-machine framework. read more