BLOG OF THE FUNDACIÓN PARA LA INVESTIGACIÓN SOBRE EL DERECHO Y LA EMPRESA AND THE FUNDACIÓN GARRIGUES





"Artificial Intelligence & Big Data". Culture and Language (Part I), by Pedro R. García Barreno


21/10/2019



Artificial Intelligence (AI) is one of the most transformative forces of our time. While there may be debate over whether AI will change our world for good or ill, we can all agree on one thing: AI would be nothing without big data. Big data and AI are considered two giants. Machine learning is considered an advanced form of AI through which smart computers can send or receive data and learn new concepts by analyzing that data without human assistance.

The Large Hadron Collider, for example, will generate about 15 petabytes of data per year. That is nothing compared with what happens when we map a whole brain, which will involve about a million petabytes of data. Astronomy, chemistry, climate studies, genetics, law, materials science, neurobiology, network theory, and particle theory are just a few of the areas already being transformed by large databases. Now this revolution is coming to the humanities. Google's massive book program, which has digitized millions of books, has spun off an application that gives researchers access to a database of billions of words across several language sets and two centuries: "big-and-long data". Google's program, the Ngram Viewer, does more than provide a unique look at the history of words. It promises to change how historians do their work, and to change our picture of history itself. A new kind of scope, big data, is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the "ivory tower".

In parallel, cognitive architectures play a vital role in providing blueprints for building intelligent systems that support a broad range of capabilities similar to those of humans. Neural network architectures for learning word vectors can train on more than 100 billion words in a day.
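At its core, Ngram-style analysis amounts to counting fixed-length word sequences in corpora sliced by time and comparing the frequencies. A minimal sketch in Python (the two toy "corpora" and their dates are invented purely for illustration, not real data):

```python
from collections import Counter

def ngram_counts(text, n=2):
    """Count n-gram frequencies in a whitespace-tokenized text."""
    tokens = text.lower().split()
    # Slide an n-word window over the token list.
    grams = zip(*(tokens[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams)

# Hypothetical stand-ins for two time-sliced corpora.
corpus_1850 = "the telegraph carries the news across the wire"
corpus_1950 = "the television carries the news across the air"

for year, corpus in (("1850", corpus_1850), ("1950", corpus_1950)):
    print(year, ngram_counts(corpus, n=2).most_common(2))
```

Scaling this trivial counting loop to billions of words across two centuries is exactly the "big-and-long data" problem the Ngram Viewer solves.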
Neural Machine Translation (NMT) systems translate between multiple languages, and NMT can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation are possible for neural translation. A novel training framework for visually grounded dialog agents, using deep Reinforcement Learning (RL) to learn end-to-end in a completely ungrounded synthetic world where the agents communicate via symbols with no pre-specified meanings, showed that two bots can invent their own communication protocol without any human supervision (tabula rasa?). RL agents not only significantly outperform supervised-learning agents, but learn to play to each other's strengths, all the while remaining interpretable to outside human observers. Bot-talk recalls twin-talk, the post-structuralist novel, or culturally constrained languages. AI languages can be evolved starting from a natural human language, or can be created ab initio.

Pedro R. García Barreno, M.D., Ph.D., MBA.
Member of the Real Academia Española
Member of the Real Academia de Ciencias de España
Member of the Scientific Committee of FIDE







Blog of the Fundación Fide and the Fundación Garrigues
The Science and Law Committee is directed by Antonio Garrigues Walker, President of the Fundación Garrigues; Cristina Jiménez Savurido, President of Fide; and Pedro García Barreno, Doctor of Medicine and professor emeritus at the Universidad Complutense.


Blog Introduction

This blog gathers the opinions of the professionals who regularly take part in the Science-Law dialogues. In no case does it represent the opinion of the Fundación Fide, the Fundación Garrigues, or the Science and Law Committee.