How I Made an AI Version of Myself

AI · December 04, 2020

The title might make it sound like I made a replica of my brain to answer all the questions you may have for me. That's not quite the case, but it's certainly something I'd like to explore, so stay tuned.

What I've actually made is a chatbot that can answer a ton of different questions with answers written by me. That behaviour sort of mimics a replica of me, while being something much more basic in nature. In this blog post, I'll tell you exactly how I implemented it, and how you can too.

But before that, if you haven't checked out the bot yet, you can't. Sorry! I'll publish it very soon, on my new website for lab experiments.

Creating the chatbot

The chatbot is powered by Amazon Lex, the same technology that powers Alexa. The job of Amazon Lex is to extract an intent and data slots from a message, or utterance. In other words, it's a natural language processor: it converts what a human might say into machine-readable language.

If a human wrote "I would like to place an order for 5 pallets of toilet paper.", Amazon Lex would likely extract data like this:

{
    "intent": "PLACE_ORDER",
    "slots": {
        "PRODUCT": "TOILET_PAPER",
        "QUANTITY": 5,
        "QUANTITY_UNIT": "PALLET"
    }
}

The above example is not an actual Amazon Lex response, but the real one is similar in shape.

Technologies like Amazon Lex are typically used for chatbots that place orders, check order status, do basic customer support, and things like that. But at their core, all these technologies do the same thing: extract an intent and produce a response. We can use this to work out what is being asked, and answer it from a list of pre-written answers.
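To make that concrete, here's a minimal sketch of the idea: once the intent name has been extracted, answering is just a lookup in a table of pre-written answers. The intent names and answers below are hypothetical placeholders, not the ones my bot actually uses.

```javascript
// Pre-written answers, keyed by intent name. (Hypothetical examples.)
const answers = {
    PLACE_ORDER: "Thanks, I've placed your order.",
    ORDER_STATUS: "Your order is on its way.",
    FALLBACK: "Sorry, I didn't understand that."
};

// Look up the answer for an extracted intent, falling back to a
// default response when the intent is unknown.
function answerFor(intentName) {
    return answers[intentName] || answers.FALLBACK;
}

console.log(answerFor('PLACE_ORDER')); // "Thanks, I've placed your order."
```

With Amazon Lex specifically, you can attach the answers directly to each intent in the console, so you may never need to write this lookup yourself.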

Alternatives to Amazon Lex

Other options include IBM Watson Assistant, Rasa (which is Open Source), Google Dialogflow, and many more.

I chose Amazon Lex because of its fairly cheap pricing and seemingly unlimited intents. IBM Watson Assistant limits the number of intents you can have on the free plan.

Creating intents

In this context, an intent is a possible question the user may ask.

For each intent, you must give examples of how the question might be asked. This is so the NLP model can be trained properly. The model will be able to understand variations of the question, even if you didn't enter them as an example.

Example of an intent

Let's create an intent that can understand when the user asks for my favourite colour.

Question: What's your favourite colour?
Possible variations:

  • Which colour do you like the most?
  • Which colour is the best in your opinion?
  • Best colour?
  • Prettiest colour?
  • Favourite colour?

You do not need to include variations with the regional spellings of a word (color or colour). This is handled by the model.
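Conceptually, an intent is just a name plus its sample utterances. You enter these by hand in the Lex console, but it can help to see the same information as data. This is a plain object for illustration only, not the exact schema Lex uses internally:

```javascript
// The favourite-colour intent from above, expressed as plain data.
// (Illustrative shape only; the Lex console collects the same
// information through its UI.)
const favouriteColourIntent = {
    name: 'FavouriteColour',
    sampleUtterances: [
        "What's your favourite colour",
        'Which colour do you like the most',
        'Which colour is the best in your opinion',
        'Best colour',
        'Prettiest colour',
        'Favourite colour'
    ]
};

console.log(favouriteColourIntent.sampleUtterances.length); // 6
```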

Connecting to the chatbot

Now that we have a working chatbot, we need to create a way for users to chat with it. I have kept instructions quite vague until now, because I want to share the theory more than the technical setup. I will now be sharing code examples of how to communicate with Amazon Lex using Node.js.

I am using Node.js because my website is hosted by Vercel, and they make serverless functions using Node.js very smooth. They also support other languages, but I found Node.js to be the easiest to work with.

Connecting to the Lex Runtime API

Before we do anything, we have to connect to the Lex Runtime API. Here's how you do it in Node.js.

var AWS = require('aws-sdk');

AWS.config.update(
    {
        accessKeyId: process.env.AWS_ID,
        secretAccessKey: process.env.AWS_SECRET,
        region: "eu-west-2"
    }
);

var lexruntime = new AWS.LexRuntime({ apiVersion: '2016-11-28' });

The AWS SDK will get its credentials from ~/.aws/credentials; if that file doesn't exist, it will look for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. On Vercel those variable names are reserved, so I picked two different names and set them manually.

If you are not using Vercel and those environment variables are available for you to use, you only have to set the region in AWS.config.update, as the credentials will be fetched automatically.

Creating a session

To send a message to the chatbot, you will need to create a session. To do that, you need your bot name, bot alias (which you get after publishing the bot), and a user ID.

I decided to randomly generate a user ID and return it, so the client can send it along with messages. This is fine, since no sensitive data is being exchanged.

Generate a random ID

var userId = Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15);

Create the session

lexruntime.putSession(
    {
        botAlias: 'Simse',
        botName: 'SimonClone',
        userId: userId,
        accept: 'text/plain; charset=utf-8'
    }, (error, data) => {
        if (error) {
            // Something went wrong
        } else {
            // It went well
        }
    }
)

Putting it all together

var AWS = require('aws-sdk');

AWS.config.update(
    {
        accessKeyId: process.env.AWS_ID,
        secretAccessKey: process.env.AWS_SECRET,
        region: "eu-west-2"
    }
);

var lexruntime = new AWS.LexRuntime({ apiVersion: '2016-11-28' });


module.exports = (req, res) => {
    var userId = Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15);

    lexruntime.putSession(
        {
            botAlias: 'Simse',
            botName: 'SimonClone',
            userId: userId,
            accept: 'text/plain; charset=utf-8'
        }, (error, data) => {
            if (error) {
                res.status(500).send({
                    status: "ERROR",
                    error: error
                })
            } else {
                res.status(200).send({
                    status: "OK",
                    userId: userId
                })
            }
        }
    )
}

Now you have a session. The session will automatically expire according to your settings in the Amazon Lex dashboard, so you don't have to worry about deleting it when you're done, although you can if you want to.
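If you do want to clean up eagerly, the Lex runtime also exposes a deleteSession operation. Here's a sketch of the parameters, reusing the bot name and alias from above; the helper function is mine, not part of any API:

```javascript
// Hypothetical helper that builds the params for lexruntime.deleteSession.
// Bot name and alias match the putSession call earlier in this post.
function deleteSessionParams(userId) {
    return {
        botAlias: 'Simse',
        botName: 'SimonClone',
        userId: userId
    };
}

// Usage would look like:
// lexruntime.deleteSession(deleteSessionParams(userId), (error, data) => { ... });
console.log(deleteSessionParams('abc123'));
```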

Sending a message

First we need to get the user ID and message from the URL request parameters. You can do this like so:

const { userId, message } = req.query

This would extract the required information from a URL like: /chat/message?userId={userId}&message={message}.
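On the client side, the message needs to be URL-encoded before it goes into the query string, since it may contain spaces, apostrophes, or question marks. The built-in URLSearchParams class (available in both Node.js and browsers) handles this; the /chat/message path is the one from this post:

```javascript
// Build the query string for the message endpoint. URLSearchParams
// takes care of URL encoding each value.
const params = new URLSearchParams({
    userId: 'abc123',
    message: "What's your favourite colour?"
});

const url = '/chat/message?' + params.toString();
console.log(url);
```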

The final function

var AWS = require('aws-sdk');

AWS.config.update(
    {
        accessKeyId: process.env.AWS_ID,
        secretAccessKey: process.env.AWS_SECRET,
        region: "eu-west-2"
    }
);

var lexruntime = new AWS.LexRuntime({apiVersion: '2016-11-28'});


module.exports = (req, res) => {
    const { userId, message } = req.query

    lexruntime.postText({
        botAlias: 'Simse',
        botName: 'SimonClone',
        inputText: message,
        userId: userId
    }, (error, data) => {
        if (error) {
            res.status(500).send({
                status: "ERROR",
                error: error
            })
        } else {
            res.status(200).send({
                status: "OK",
                response: data.message,
                intent: data.intentName
            })
        }
    })
}

That's it. If you're using Vercel, you can copy these completed functions into your api folder and name them something like session.js and message.js.

In the future I'd like to create a package that can simplify this process even further, and share a post on how to use it, on services other than Vercel. If you have any clarifying questions, feel free to email me at [email protected].

Thanks for reading :)
