Outline

In this lab you will:

  1. investigate the concept “moral intelligence” and brainstorm ideas for this second portfolio piece
  2. load a pre-trained ml model into p5 and use it to classify images
  3. train your own ml model on a classification task of your choosing

Part 0: What is Moral Intelligence?

What is Artificial Intelligence?

The first question to address for this theme is what you mean when you say “Artificial Intelligence”. In “Atlas of AI”, Kate Crawford claims that artificial intelligence is neither artificial nor intelligent.

do: spend 5 minutes discussing what you think artificial intelligence means, and write down your ideas

What is Artificial Intelligence for? How is it used around the world and throughout business, industry, government, defence, education, healthcare, social media, and embedded into the systems we use?

do: spend 5 minutes investigating and documenting how and where artificial intelligence technologies are used

The use of AI and LLMs can be problematic. There are many examples of AI technologies being used for coercive control, for greed, for environmental destruction, and for killing. There are also errors caused by the use of AI technologies, which amplify systemic bias and lead to the misdiagnosis of conditions. The power demands of LLMs are (unnecessarily) increasing global electricity and water consumption, contributing to global warming.

do: spend 5 minutes investigating and documenting serious negative impacts of the use of AI and LLMs

What is your moral compass?

What is important to you? What do you value? What do you want and need in your life?

Defining these will help you identify the good things in life. Morality is about identifying the good (the positive aspects of life) and the bad (the negative aspects of life that you wish to avoid). Morality does not exist in a vacuum: we define our morality through our social relationships. We determine right from wrong by learning from parents, elders, siblings, peers, teachers and spiritual guides.

do: spend 5 minutes reflecting on your own needs, wants, goals - identifying the positives and things you reject

A moral intelligence requires you to identify your moral value system. It also requires action to actively pursue those moral values: we should not act against our moral values, as this can lead to internal conflict, with negative impacts on our health and wellbeing.

How do AI/ML applications align with your moral compass?

We have identified our moral values, and the potential negative applications of AI.

How do these negative applications align with your own moral compass?

Should we care?

Is research and development in computer science value-free? (have you watched Oppenheimer?)

How can we build systems in which we can ensure that AI will only be used for good?

You may wish to look through these projects:

Can we design AI to act morally? The Distributed AI Research Lab and Critical AI are asking some of these questions. Eryk Salvaggio has explored AI and Images in detail through this course.

do: spend 10 minutes reflecting on ways to build, represent, or raise questions about a moral intelligence in an interactive dynamic artwork

Project Repository

Fork and clone the portfolio repo for the second portfolio piece.

Open assets/portfolio-entry-1.md and answer the questions posed:

  • What does moral intelligence mean to you?
  • Who do you think would benefit from a more moral AI?
  • How might moral AI benefit your target group/person, and how is this better for them?
  • Which value system, view or opinion do you plan to explore in your project?
  • Discuss any examples of media (music, digital art, film, photography, other), phenomena or code art which remind you of positive aspects of artificial intelligence

Part 1: Image Classifiers

As you work through the lab, it’ll be helpful to have the ml5.js reference open in a new tab so you can refer to it easily.

Overview of ML models

For the purposes of this workshop, we will think of the machine learning model as a black box. Machine learning models can be used for classification tasks. The “function” of a classifier model is to answer a question e.g. “What is this a picture of?”. The model will then “predict” an answer (make a classification) based on the information you give it (in this case, the picture) and “knowledge” it has acquired from data it has been trained on (in this case, the data is pictures).

If we were to create a machine learning model from “scratch”, we would first need to create a hollow black box (I’m stretching the analogy, but stay with me). Then we would need to teach the model how to answer a specific question. This process is called training. Training involves giving the model examples of data to inform its classification. With each example of data, we also provide the correct answer to the question. This is called supervised learning: the supervision takes the form of providing a category label for each instance of data.

After training, the model will have built up a “knowledge representation”. The black box is no longer hollow :3 We can then test the quality of the model’s knowledge representation by providing it with more example data, only this time we withhold the answers and ask our model to classify the data. This is called testing. The model is only capable of classifying data using the labels it was given during training and the representative data samples for each label. This means that if we only show the model pictures of cats and dogs during training, and we tell it that the only labels in our universe are “cat” and “dog”, then regardless of what pictures we show the model during testing, it can only classify them as either “cat” or “dog”.

The training and testing phases each have a dataset associated with them. We will return to this idea of training and testing in the second activity, where you will train your own model.
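To make the idea of labelled examples concrete, here is a purely illustrative sketch of what a supervised training set might look like in JavaScript. The file names and structure are made up; real training pipelines store this differently, but the key idea is the same: each training instance is paired with its correct label, while the test instances are not.

    // Illustrative only: each training example pairs an input (an image file)
    // with the correct answer (its category label). File names are made up.
    let trainingData = [
      { image: 'cat01.jpg', label: 'cat' },
      { image: 'cat02.jpg', label: 'cat' },
      { image: 'dog01.jpg', label: 'dog' },
      { image: 'dog02.jpg', label: 'dog' }
    ];

    // A test set contains the same kind of data, but we withhold the labels
    // and ask the trained model to predict them.
    let testData = ['cat03.jpg', 'dog03.jpg'];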

Let’s get started with the lab. Fork and clone the template repo for this week’s lab.

Image classification with a pre-trained model

The model you will be using in this first activity is called an image classifier. Its function is to answer the question “What is this a picture of?”. We will be using a pre-trained model (not hollow) which has been trained on a popular dataset called ImageNet. ImageNet contains millions of images of the kind you might easily find with a quick Google search. The pre-trained model we will use is capable of detecting (therefore classifying) 1,000 different subjects in pictures.

We will start by stepping you through the process of loading a model into p5. To do this, we use the ml5.imageClassifier() function, which takes two arguments. The first argument specifies the name of the model. We will be using ‘MobileNet’, which is optimized to run on mobile phones. The second argument specifies something called a callback function.

Since working with ml models (loading, training and classifying) can involve operations which happen ‘behind the scenes’, getting responses from the model might take a few seconds longer than you might expect. So when we initialise the model, we give it the name of a function which it will call upon finishing all of these ‘behind the scenes’ operations. In this case, once the model has finished loading, it calls the ‘modelLoaded’ function.

mnet = ml5.imageClassifier('MobileNet', modelLoaded)

Copy the line of code above and paste it at the bottom of your setup() function. You’ll notice that the modelLoaded function is not defined yet. Write a function called “modelLoaded” which takes no arguments and prints “Finished Loading!” to the console. Run your code and check your console to see if the model has loaded.
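As a rough guide, your sketch might end up looking something like this. This is a minimal sketch only: your setup() will contain other code from the template, and the global declaration of mnet may already exist there.

    let mnet; // will hold the pre-trained MobileNet image classifier

    function setup() {
      createCanvas(400, 400);
      // ... the rest of your existing setup code ...
      mnet = ml5.imageClassifier('MobileNet', modelLoaded);
    }

    // called by ml5 once the model has finished loading
    function modelLoaded() {
      print('Finished Loading!');
    }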

Now that the model is loaded, we can feed it some images and see which classifications are given.

Open a new tab in your browser and find some example images. You may want to find a range of different subjects - animals are a safe bet. For a full list of the 1000 possible classes, see the assets/imagenet1000_clsidx_to_labels.txt file. For MobileNet, the classifications work best if there is only one subject in the image. Save the images into the images subfolder in your lab template repo.

Once you’ve found some images, we can load them into p5. You might recall that we need to create an image object in order to load images into p5. Here is the p5 reference for loading images if you need a refresher on how to do that.

Load one of the images you’ve chosen and store it in the variable test_image.
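In case it helps, here is one way this might look. The file name is a placeholder for whichever image you saved into the images subfolder, and your template may already declare test_image for you.

    let test_image; // will hold the image data to classify

    // preload() runs before setup(), so the image is fully loaded before we use it
    function preload() {
      test_image = loadImage('images/my-test-image.jpg'); // placeholder file name
    }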

The last step is to ask the model to classify the image. To do this, we use the classify() method. The classify() method belongs to the model object we stored in the mnet variable, which means we can call it using the dot operator . followed by the name of the method, i.e. mnet.classify().

    mnet.classify(test_image, gotResults)

test_image is the name of the variable which holds the image data. If you have used a different variable name you will need to change test_image in this line of code accordingly.

You might have noticed that the classify() function has a second argument, which is also a callback function. The callback function gotResults(error, results) is already defined in your template. If the classification is successful, gotResults stores the result of the classification in the String variable classification, which is displayed on the canvas. It also logs each classification to the console.
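For reference, a callback like gotResults will look roughly like the sketch below. This is based on the description above; the version already defined in your template may differ in detail, and it assumes a global String variable called classification that draw() writes to the canvas.

    // Rough sketch of a gotResults callback; your template's version may differ.
    function gotResults(error, results) {
      if (error) {
        console.error(error);
        return;
      }
      // results is an array of { label, confidence } objects, most confident first
      classification = results[0].label; // shown on the canvas by draw()
      for (let r of results) {
        console.log(r.label, r.confidence);
      }
    }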

Add the mnet.classify() line from above to the bottom of the modelLoaded callback function to make a classification.

Why do you think we make the classification inside the modelLoaded callback function instead of the bottom of the setup() function?

Run your code and check the results of your classification in the console.

Part 2: Train your own image classifier

In this section of the lab, we will be using Teachable Machine to train our own image classifier.

Follow the link above to Teachable Machine’s training workspace. You should see four rectangular cells; the left-most cells are for input data (one for each class), the centre cell is for the model, and the right-most cell is for the output. The default view only shows input cells for two classes, but you can add more if you’d like.

You can create an image database for each input cell either via file upload or webcam. Sadly, the lab computers do not have webcams, but if you are using a personal laptop with a webcam, you can use it to provide your data.

Before we start training, let’s decide what sort of question we want our classifier to answer. What are the possible answers to this question (what will the classes be)? Discuss your ideas with the person sitting next to you. Once you’ve decided on the classes your classifier will distinguish between, it’s time to gather some sample data.

Open a new tab in your browser and search for some images for each of the classes in your classification problem. Download the images and save them in a designated folder.

By selecting either the File upload or webcam/microphone option on Teachable Machine, add one image to each of the input cells. Note that the image files are cropped to a square automatically. Then click Train Model.

When your model has finished training, you can then test it by uploading a test-image.

think: Can you think of a way to solve this image classification problem using only the programming concepts you have learned so far? Things like if-statements? Discuss this with someone sitting next to you.

Choose a new image which represents one of the classes and upload it as a test-image in the right-most cell. What are the output probabilities for each class? Is it what you expected? Discuss the results with the person next to you.

think: What output-probabilities would you expect if you used an image from your training set as a test-image? Is there a range of output-probabilities which you would consider to be “good”? Discuss this with someone sitting next to you and/or your instructors.

Generally, we are going to get a pretty lousy classifier if we only train our model on one image per class. Let’s go ahead and add a few more examples to each input cell. You can also rename the input cells to something more meaningful than “Class 1” and “Class 2”.

You’ll see an Advanced drop-down menu under the training cell. If you’d like to have a go at fine-tuning the training parameters, change some of the parameter values and train your model again. How does it affect the accuracy of the classification? The best way to monitor accuracy is to navigate to Advanced > Under the hood. This will open a panel on the right-hand side of the screen with an accuracy plot. If you have any questions about how these parameters affect training, ask one of your instructors.

When finding images for training, you could just browse the internet and select whatever images take your fancy. How can you effectively select images for your training (and perhaps your testing) examples? If you need some ideas, discuss with the person next to you and/or with one of the instructors.

Prep for next week

Next week we will be continuing on the topic of classifiers. In fact we will be using pre-trained classifiers to make some visual art.

If you want to use the model you trained today in next week’s class, click on Export Model at the top of the right-most cell and select Download my model. This will save the model to your downloads folder, but we recommend creating a new folder to house all of the machine learning models you might train for this course, including the one you just downloaded. This will make it easier to access and load up into our js file next week.
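To give you an idea of where this is heading, loading an exported model back into ml5 will look something like the sketch below. This is a sketch only: ‘my-model’ is a placeholder for whatever folder you put the downloaded files in, and we will step through the details next week.

    // Sketch only: load a classifier exported from Teachable Machine.
    // 'my-model' is a placeholder folder containing model.json, metadata.json and weights.bin.
    myClassifier = ml5.imageClassifier('my-model/model.json', modelLoaded);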

You do not have to use the model you trained today for next week’s exercise if you’d prefer not to, because we will also be providing a pre-trained classifier. If, however, you want to personalize the classifier, by all means use your own: either the one you trained today or, if you want to work on it outside of the workshop, one you train by following the steps in this lab in time for next week’s class.

Remember to commit your code and push it up to Gitlab, and log out of the lab computer.

Summary

Congratulations! In this lab you:

  • investigated the concept of moral intelligence and brainstormed ideas for your second portfolio piece

  • used a pre-trained image classifier to make classifications

  • trained your own image classifier with Teachable Machine

  • gained a stronger intuition about what affects the model’s certainty when classifying
