Artificial Intelligence (AI) is one of the most transformative forces of our time. While there may be debate over whether AI will transform our world for good or ill, one thing we can all agree on is that AI would be nothing without big data. Big data and AI are considered two giants. Machine learning is considered an advanced branch of AI through which smart computers can send or receive data and learn new concepts by analyzing the data without human assistance. The Large Hadron Collider, for example, generates about 15 petabytes of data per year. That is nothing compared with what mapping a whole brain would involve: about a million petabytes of data. Astronomy, chemistry, climate studies, genetics, law, materials science, neurobiology, network theory, and particle theory are just a few of the fields already being transformed by large databases.

Now this revolution is coming to the humanities. Google's massive book program, which has digitized millions of books, has spun off an application that gives researchers access to a database of billions of words across several languages and two centuries: "big-and-long data". Google's program, the Ngram Viewer, does more than provide a unique look at the history of words. It promises to change how historians do their work and to change our picture of history itself. A new kind of scope, big data, is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the "ivory tower". In parallel, cognitive architectures play a vital role in providing blueprints for building intelligent systems that support a broad range of capabilities similar to those of humans. A neural network architecture for learning word vectors can train on more than 100 billion words in a day.
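The word-vector architectures alluded to above (word2vec-style models) are trained on simple center-word/context-word pairs extracted from running text. As a minimal illustration only, not the production systems described here, the data-preparation step can be sketched in a few lines of Python; the function name and window size are illustrative choices:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for a skip-gram
    word-vector model from a list of tokens."""
    pairs = []
    for i, center in enumerate(tokens):
        # Look at neighbors up to `window` positions away on each side.
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs
```

Billions of such pairs, streamed efficiently, are what make training on corpora of 100 billion words feasible; the model itself then learns a vector for each word so that words appearing in similar contexts end up close together.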
A Neural Machine Translation (NMT) system can translate between multiple languages, and NMT can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation are possible for neural translation. A novel training framework for visually grounded dialog agents, deep reinforcement learning (RL) applied end to end in a completely ungrounded synthetic world where the agents communicate via symbols with no pre-specified meanings, showed that two bots can invent their own communication protocol without any human supervision (tabula rasa?). The RL agents not only significantly outperform supervised-learning agents but learn to play to each other's strengths, all while remaining interpretable to outside human observers. Bot-talk recalls twin-speak, the post-structuralist novel, or culturally constrained languages. AI languages can be evolved starting from a natural human language, or can be created ab initio.
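The multilingual trick behind zero-shot translation is disarmingly simple: a single shared model is trained on many language pairs, and the desired target language is signaled by an artificial token prepended to the source sentence. A minimal sketch of that preprocessing step, with an illustrative token format modeled on the published multilingual NMT setup:

```python
def tag_source(sentence, target_lang):
    """Prepend an artificial target-language token to a source sentence,
    e.g. '<2es>' meaning 'translate into Spanish'.  The shared model then
    learns to route any input, even for pairs never seen in training,
    toward the requested output language."""
    return f"<2{target_lang}> {sentence}"
```

Because the token, not the model architecture, selects the output language, a system trained on English↔Spanish and English↔Portuguese can be asked for Spanish→Portuguese directly, which is the zero-shot bridging described above.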
Pedro R. García Barreno, M.D., Ph.D., MBA.
Member of the Real Academia Española
Member of the Real Academia de Ciencias de España
Member of the Scientific Committee of FIDE