Once the outputs from the different AIs have been gathered and cleaned up, they are displayed across two monitors by a patch I coded in VVVV. On the first monitor, the patch shows the portraits by loading their textures and descriptions.
It also plays the audio narration of the displayed portrait's description and extracts its most important terms, which it feeds into the search engine of one of nine sites (Google, Google Maps, Google Images, YouTube, Instagram, eBay, Facebook, Amazon and Twitter) across four different browsers, shown in a column on the second monitor.

The search term is supplied by assembling the text string (the URL) that the browser loads, for example:

         https://www.instagram.com/explore/tags/example
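
The URL-building step can be sketched in Python. This is an illustrative assumption, not the actual implementation (the installation uses a VVVV patch), and the search-URL templates below are common public patterns for each site:

```python
from urllib.parse import quote_plus

# Hypothetical sketch of the URL-building step; the real installation
# does this inside a VVVV patch. Templates are illustrative assumptions.
SEARCH_TEMPLATES = {
    "google":        "https://www.google.com/search?q={q}",
    "google_maps":   "https://www.google.com/maps/search/{q}",
    "google_images": "https://www.google.com/search?tbm=isch&q={q}",
    "youtube":       "https://www.youtube.com/results?search_query={q}",
    "instagram":     "https://www.instagram.com/explore/tags/{q}",
    "ebay":          "https://www.ebay.com/sch/i.html?_nkw={q}",
    "facebook":      "https://www.facebook.com/search/top?q={q}",
    "amazon":        "https://www.amazon.com/s?k={q}",
    "twitter":       "https://twitter.com/search?q={q}",
}

def build_search_url(site: str, term: str) -> str:
    """Turn an extracted term into a search URL for the given site."""
    # quote_plus escapes spaces and special characters for use in a URL.
    return SEARCH_TEMPLATES[site].format(q=quote_plus(term))

print(build_search_url("instagram", "example"))
# https://www.instagram.com/explore/tags/example
```

Keeping the templates in a lookup table makes it easy to pick one of the nine sites at random (or by rule) and hand the finished string to whichever of the four browsers is next in the column.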
