Outline

In this lab you will:

  1. Explore the meanings of AI
  2. Explore types of Generative AI
  3. Use a character-based AI model (a Recurrent Neural Network, or RNN) to generate text (based on a “seed” input)

Fork and Clone Lab 6 from the GitLab Repository.

Part 0: Shaders and Framebuffers Recap

First we need to talk about shaders: the issues we can encounter when implementing them, and how to debug those problems. This is an open discussion. Raise issues you encountered and how you resolved them (if you were able to).

Framebuffers. What are they? How do you use them? Why is this important?
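
As a memory-jogger, here's a minimal p5.js framebuffer sketch (a rough illustration only; it assumes p5.js 1.7+ and a WEBGL canvas):

let fb;

function setup() {
  createCanvas(400, 400, WEBGL);
  fb = createFramebuffer(); // an offscreen surface we can draw into
}

function draw() {
  fb.begin(); // everything from here to end() is drawn into fb, not the canvas
  background(32);
  circle(sin(frameCount * 0.05) * 100, 0, 50);
  fb.end();

  background(0);
  image(fb, -200, -200); // draw the framebuffer back onto the canvas like an image
}

Because the framebuffer lives on the GPU, this is also the usual way to feed the output of one shader pass into another.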

Part 1: What even is AI?

Artificial Intelligence is a term that was coined in the 1950s, most often attributed to John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. This group of computer scientists rejected the alternative directions being explored by the cyberneticists. With corporate backing (IBM and Bell Labs), connected with university research facilities (Dartmouth and MIT), they were successful in generating interest and secured funding for future potential capabilities of artificial intelligence. Today cybernetics is relatively unknown, and everyone talks about AI.

But what is it?

What is Intelligence? What do we mean by “Artificial”?

We will be doing this activity in small groups of 2-3 people.

  • What is your definition of AI?
  • What can AI be used for/what is the purpose of AI?
  • What does AI need?
  • What do you want to learn about AI, and why?

Write down the important ideas you get from others.

After each person in your group has discussed these questions, we will report back to the class.

Your instructor will guide you to answer each question (one group will be selected for each question). The answers will be scribed into our “What is AI?” Teams channel.

Part 2: AI and Art

Can AI be used to create Art?

To answer this question we need to think about what Art is, and what we understand AI to be doing when it does something: i.e. intention, agency and autonomy.

But wait: aren’t we using computers to create Art already? What is the difference?

Who is making the Art? Who has the intention, the autonomy, the agency to determine what is created?

OK then: can AI be a tool for artists to use and explore ideas?

We will be doing this activity in small groups of 2-3 people.

  • What is your definition of Art?
  • Can we have AI as artist?
  • What do you need to create Art?
  • What examples of AI creative tools do you know about?

Write down the important ideas you get from others.

After each person in your group has discussed these questions, we will report back to the class.

Your instructor will guide you to answer each question (one group will be selected for each question). The answers will be scribed into our “What is AI?” Teams channel.

Part 3: Collections and Loops (recap)

If you haven’t already, fork and clone the template repo for this week’s lab.

As a refresher, we’re going to recap some stuff (from last year) with collections and loops.

In the template, there’s an array called ymcaCharacters, and below that (in the draw() loop) there’s a for loop. Your job in this part of the lab is to modify the code in the loop so that it draws the letters “YMCA” on the canvas, each in a different random colour.

There are lots of different ways to do this, but if you’re not sure how to get started, here are some things to think about (there’s also a starter sketch below):

  • inside the loop, what’s the variable whose value is changing each time?
  • how can you use that variable to select the letter you want to draw onto the canvas?
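
If you’d like a concrete starting point, here’s one possible shape for the loop body (a sketch only; it assumes ymcaCharacters is an array of four single-letter strings, as in the template, and you’ll want to tweak the sizes and positions):

for (let i = 0; i < ymcaCharacters.length; i++) {
  // i changes each time through the loop: use it to pick a letter,
  // and to space the letters out across the canvas
  fill(random(255), random(255), random(255)); // a new random colour every frame
  text(ymcaCharacters[i], 50 + i * 80, 120);
}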

If you managed to do that pretty easily, you could try a couple of variations:

  • can you make the YMCA visuals a bit more… fabulous? E.g. can you keep each letter the same colour from frame to frame, so that the letters are still all different colours but they’re no longer flashing random colours over time? Or add visual flair in some other way (shaders?)? (See the sketch after these bullets for one approach.)

  • can you adjust the timing so that instead of drawing the letters all at once, they’re drawn one after the other (just like the song) - and bonus points if you can get the timing right (which will be something like Y...M.C.A.......)
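
Here’s one way to approach both variations (again, just a sketch: the colour values and the revealTimes beat positions are made-up assumptions, and in the template you’d merge this into the existing setup() and draw() rather than replacing them):

let letterColours = []; // one fixed random colour per letter
let revealTimes = [0, 1500, 2000, 2500]; // rough guesses at the song's timing, in ms

function setup() {
  createCanvas(400, 200);
  textSize(64);
  for (let i = 0; i < ymcaCharacters.length; i++) {
    // choosing the colours once, in setup(), stops the frame-by-frame flashing
    letterColours.push(color(random(255), random(255), random(255)));
  }
}

function draw() {
  background(0);
  const t = millis() % 4000; // loop the whole phrase every four seconds
  for (let i = 0; i < ymcaCharacters.length; i++) {
    if (t >= revealTimes[i]) { // only draw letters whose time has come
      fill(letterColours[i]);
      text(ymcaCharacters[i], 50 + i * 80, 120);
    }
  }
}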

Once you’re done with that, it’s time to move onto the AI part of today’s lab.

Part 4: ml5.js

We will be working with a JavaScript library called ml5.js for the rest of the term.

It’s a very approachable way to learn about machine learning and, more importantly, it integrates easily with p5.js. This means we can harness these models in our visuals and music.

As you work through the lab, it’ll be helpful to have the ml5.js reference open in a new tab so you can refer to it easily.

Overview of ML models

For the purposes of this workshop, we will think of the machine learning model as a black box (don’t worry, we will peek into the black box, at least a little bit, later in the course). One of the “functions” of a machine learning model is to answer a question, e.g. “What is this a picture of?”. The model will then predict an answer based on the information you give it (in this case, the picture) and “knowledge” it has acquired from pictures it has seen in the past.

If we were to create a machine learning model from “scratch”, we would first need to create a hollow black box (I’m stretching the analogy, but stay with me). Then we would need to teach the model how to answer a specific question. This process is called training. Training involves giving the model examples of data to inform its predictions. With each example, we also provide the correct answer to the question.

After training, the model will have built up a “knowledge representation”. The black box is no longer hollow :-D We can then test the quality of the model’s knowledge representation by providing it with more example data, only this time we withhold the answers and ask our model to predict them. This is called testing. The model is only capable of predicting an answer from the pool of answers it was given during training. This means that if we only show the model pictures of cats and dogs during training, and we tell it that the correct answers are “cat” and “dog” respectively, then regardless of what pictures we show the model during testing, it can only ever predict “cat” or “dog”.

The training and testing phases each have their own dataset associated with them. We will return to this idea of training and testing later in the semester, when you will train your own model.
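
To make the train-then-test flow concrete, here’s a toy sketch using ml5’s neuralNetwork (just an illustration for now, not part of today’s lab; the data, labels and numbers are made up):

// a tiny "black box" that learns to label a point by which side of the canvas it's on
const nn = ml5.neuralNetwork({ task: "classification" });

// training: examples of data, each paired with the correct answer
nn.addData({ x: 10 }, { label: "left" });
nn.addData({ x: 390 }, { label: "right" });
// ...in practice you'd add many more examples...

nn.normalizeData();
nn.train({ epochs: 32 }, () => {
  // testing: this time we withhold the answer and ask for a prediction
  nn.classify({ x: 200 }, (err, results) => {
    console.log(results); // the prediction can only ever be "left" or "right"
  });
});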

Part 5: Text generation with RNNs

Today’s ml5.js AI content is based on the charRNN example. Short for character recurrent neural network, a charRNN is an AI model which generates text (i.e. sequences of letters or characters). AI systems can take input (and produce output) in all sorts of different forms, because the fundamentals of how they work are the same: computers represent things as numbers, then algorithms crunch those numbers to produce new numbers, and so on.

If you’d like to know a bit more about how charRNNs work, then there’s lots of stuff online—perhaps this blog post is a good place to start.

The way that we interact with the charRNN model in ml5.js is pretty much the same as the other models you’ve seen: you start by creating a model object, and when you want to actually use the model you call a function with some input data and a callback function which will be called when it’s done.

This time the model setup code (which goes in the setup function) looks like this (assuming you’ve declared your rnn variable and your modelLoaded callback somewhere):

rnn = ml5.charRNN("models/bolano/", modelLoaded);
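
For reference, the declarations and callback this assumes might look something like this (the names match the snippets in this section, but you can call them whatever you like):

let rnn; // the charRNN model object
let generatedText = ""; // filled in by the generate callback below

function modelLoaded() {
  console.log("charRNN model loaded and ready");
}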

and the “call the model to process the input” part might look something like the code block below. Note: this relies on the generatedText variable having been declared and initialised to an empty string (as above).

function textGenerationCallback(err, results) {
  // store the generated text so draw() can display it
  generatedText = results.sample;
}

function mousePressed() {
  // floor() the length: we want a whole number of characters
  rnn.generate({ seed: "Today the weather is", length: floor(random(10, 40)) }, textGenerationCallback);
}

Lastly, we will need to draw our text onto our canvas. To do this, add the code below to your draw() loop.

  fill(255);
  text(generatedText, 50, 200);
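
In context, that might end up looking something like this (the background() call is just one way to stop old text piling up as generatedText changes; your template’s draw() will have other things in it too):

function draw() {
  background(0); // clear the canvas each frame
  fill(255);
  text(generatedText, 50, 200);
}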

A couple of things to note with the call to the model (the rnn.generate() function):

  • the first argument is an object with both seed and length properties (can you guess what they do? check the API docs if you’re not sure)

  • the results object passed to your callback has a .sample property which contains the actual “continuation” text generated from the seed
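
generate() also accepts a temperature option (a number between 0 and 1, per the ml5.js docs) which controls how adventurous the sampling is; it’s worth experimenting with:

// lower temperature = safer, more repetitive text; higher = wilder output
rnn.generate({ seed: "Today the weather is", length: 30, temperature: 0.9 }, textGenerationCallback);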

Your job in this exercise is to make something with this generated text. It could be:

  • a graffiti bot
  • a chatbot
  • a karaoke machine which makes up lyrics on the fly (bonus points if they rhyme or have the right number of syllables)
  • a fictional text message simulator between two different models
  • a game where the model gives you several different sentence fragments and you have to choose the one which makes the most (or least) sense

INSPO: Have a look at these typography posters for some inspiration.

Remember to commit your code and push it up to GitLab, and log out of the lab computer.
