
Part 1

Types of Chatbots

In the world of machine learning and AI there are many different kinds of chatbots. Some are virtual assistants, some exist just to talk to, and others are customer support agents like the ones businesses use to answer common questions. For this tutorial we will be creating a relatively simple chatbot that will be used to answer frequently asked questions.

Install Packages

Before starting to work on our chatbot we need to download a few Python packages. Please note that, as of writing this, these packages will ONLY WORK IN PYTHON 3.6. Hopefully this will be fixed in the future.

We will simply use pip to install the following:

  • numpy
  • nltk
  • tensorflow
  • tflearn

Simply open a command prompt (CMD) and type pip install "package_name", replacing "package_name" with each of the entries listed above.
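For example, the four commands look like this (depending on your setup you may need pip3 or python -m pip instead of pip):

pip install numpy
pip install nltk
pip install tensorflow
pip install tflearn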

Training Data

Now it's time to understand what kind of data we will need to provide our chatbot with. Since this is a simple chatbot we don't need to download any massive datasets; we will just use data that we write ourselves. To follow along with the tutorial properly you will need to create a .json file that follows the same format as the one seen below. I've called my file "intents.json".

{"intents": [
        {"tag": "greeting",
         "patterns": ["Hi", "How are you", "Is anyone there?", "Hello", "Good day"],
         "responses": ["Hello, thanks for visiting", "Good to see you again", "Hi there, how can I help?"],
         "context_set": ""
        },
        {"tag": "goodbye",
         "patterns": ["Bye", "See you later", "Goodbye"],
         "responses": ["See you later, thanks for visiting", "Have a nice day", "Bye! Come back again soon."]
        },
        {"tag": "thanks",
         "patterns": ["Thanks", "Thank you", "That's helpful"],
         "responses": ["Happy to help!", "Any time!", "My pleasure"]
        },
        {"tag": "hours",
         "patterns": ["What hours are you open?", "What are your hours?", "When are you open?" ],
         "responses": ["We're open every day 9am-9pm", "Our hours are 9am-9pm every day"]
        },
        {"tag": "payments",
         "patterns": ["Do you take credit cards?", "Do you accept Mastercard?", "Are you cash only?" ],
         "responses": ["We accept VISA, Mastercard and AMEX", "We accept most major credit cards"]
        },
        {"tag": "opentoday",
         "patterns": ["Are you open today?", "When do you open today?", "What are your hours today?"],
         "responses": ["We're open every day from 9am-9pm", "Our hours are 9am-9pm every day"]
        }
   ]
}

What we are doing with the JSON file is creating a bunch of messages that the user is likely to type in and mapping them to a group of appropriate responses. The tag on each dictionary in the file indicates the group that each message belongs to. With this data we will train a neural network to take a sentence of words and classify it as one of the tags in our file. Then we can simply pick a response from that group and display it to the user. The more tags, responses, and patterns you provide, the better and more complex your chatbot will be.
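To preview where this is headed, here is a minimal sketch of that final lookup step, assuming the network (which we haven't built yet) has already predicted a tag. The predicted_tag value below is hard-coded purely for illustration:

import json
import random

with open('intents.json') as file:
    data = json.load(file)

predicted_tag = 'greeting'  # hypothetical output from the classifier we will build

# find the matching intent and pick one of its canned responses at random
for intent in data['intents']:
    if intent['tag'] == predicted_tag:
        print(random.choice(intent['responses']))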

Loading our JSON Data

We will start by importing some modules and loading in our JSON data. Make sure that your .json file is in the same directory as your Python script!

import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()

import numpy
import tflearn
import tensorflow
import random

import json
with open('intents.json') as file:
    data = json.load(file)

Now our JSON data will be stored in the variable data.
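You can sanity-check the load by printing a small piece of it:

print(data['intents'][0]['tag'])  # prints: greeting (using the intents.json shown above)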

Extracting Data

Now it's time to take out the data we want from our JSON file. We need all of the patterns and which class/tag each one belongs to. We also want a list of all of the unique words in our patterns (we will talk about why later), so let's set up some blank lists to store these values.

words = []
labels = []
docs_x = []
docs_y = []

Now it's time to loop through our JSON data and extract what we want. For each pattern we will turn it into a list of words using nltk.word_tokenize, rather than keeping it as a string. We will then add each tokenized pattern to our docs_x list and its associated tag to the docs_y list.
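As a quick illustration of what the tokenizer does (note that you may need to run nltk.download('punkt') once beforehand to fetch the tokenizer models):

print(nltk.word_tokenize('Is anyone there?'))
# ['Is', 'anyone', 'there', '?']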

for intent in data['intents']:
    for pattern in intent['patterns']:
        wrds = nltk.word_tokenize(pattern)
        words.extend(wrds)
        docs_x.append(wrds)
        docs_y.append(intent["tag"])
        
    if intent['tag'] not in labels:
        labels.append(intent['tag'])
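
To make the result concrete, running this loop on the intents.json shown above leaves the lists holding values like these:

docs_x[0]   # ['Hi']
docs_x[1]   # ['How', 'are', 'you']
docs_y[0]   # 'greeting'
labels      # ['greeting', 'goodbye', 'thanks', 'hours', 'payments', 'opentoday']
words[:5]   # ['Hi', 'How', 'are', 'you', 'Is'] (not yet stemmed or deduplicated)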

In the next tutorial we will do some preprocessing of this data and get it ready to feed to our neural network.

Full Code

Note: This code is slightly different from what is shown in the video because there were a few mistakes in the video that I've fixed here.


import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()

import numpy
import tflearn
import tensorflow
import random

import json
with open('intents.json') as file:
    data = json.load(file)


words = []
labels = []
docs_x = []
docs_y = []

for intent in data['intents']:
    for pattern in intent['patterns']:
        wrds = nltk.word_tokenize(pattern)
        words.extend(wrds)
        docs_x.append(wrds)
        docs_y.append(intent["tag"])
        
    if intent['tag'] not in labels:
        labels.append(intent['tag'])
Design & Development by Ibezio Logo