Groh: Field Test

For my field test I created a chatbot that explains drone law. The goal of this Messenger bot is to break down the basics of drone law so that novice flyers won't get in trouble when they fly their drones.

https://www.messenger.com/t/287519685033756/

Drone law is complicated, and many rules exist that most people don't know about. Some of these laws run counter to what people assume drones are for. The FAA stresses safety as one of its major concerns, which is why pilots aren't allowed to fly directly over other people, flying at night is prohibited, there is a maximum flight altitude, and pilots must keep their drones within their own line of sight. Few resources outline these rules thoroughly and simply, so DroneBot attempts to clear up the confusion and prepare pilots to fly safely and legally while still having fun.

Creation

I used chatfuel.com to create the chatbot. The website has an easy-to-use interface: all I have to do is input keywords and the response they should trigger.

In my experience, Chatfuel does not limit the number of answers you can input.
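
Chatfuel handles the matching itself, so no code is involved, but the underlying idea is simple enough to sketch. Here is a minimal, hypothetical illustration of keyword-triggered responses; the keywords and answers are invented for the example, not DroneBot's actual rule set.

```python
# A minimal sketch of the keyword-to-response idea behind a Chatfuel bot.
# Chatfuel is a visual tool, so this is conceptual only; the keywords and
# answers below are invented examples, not DroneBot's actual rule set.

RESPONSES = {
    ("night", "dark"): "Flying at night is prohibited.",
    ("altitude", "how high"): "Stay below the FAA's maximum flight altitude.",
    ("over people", "crowd"): "You may not fly directly over other people.",
}

FALLBACK = "Sorry, I didn't understand that."

def reply(message: str) -> str:
    """Return the first answer whose keywords appear in the message."""
    text = message.lower()
    for keywords, answer in RESPONSES.items():
        if any(keyword in text for keyword in keywords):
            return answer
    return FALLBACK

print(reply("How high can my drone go?"))  # -> altitude answer
```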

The program also allows you to insert 'blocks', interactive elements that users can click. I decided to include a slideshow of five different drones with their prices and websites so users can learn more about each specific drone.

I also included blocks designed to give the user an idea of what questions to ask: three tip boxes that pointed the user in different directions. These doubled as a useful tool when the user asked a question the bot couldn't answer, because they offered questions that were guaranteed a response.
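
The tip boxes behave like a guided fallback. A rough sketch of that behavior, with placeholder suggestions rather than DroneBot's actual tips:

```python
# A sketch of a tip-box-style fallback: when no keyword matches, suggest
# questions the bot is guaranteed to answer. The suggestions below are
# placeholders, not DroneBot's actual tips.

SUGGESTED_QUESTIONS = [
    "Where can I fly?",
    "When can I fly?",
    "How high can I fly?",
]

def fallback_message() -> str:
    tips = "\n".join(f"- {q}" for q in SUGGESTED_QUESTIONS)
    return "I didn't catch that. Here are some questions I can answer:\n" + tips

print(fallback_message())
```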


In total, I had nearly 80 responses prepared to answer any question a user might ask. Each answer is triggered by at least a couple of different keywords or phrases.

Results

I sent links to friends, family and professionals to test out the bot. I got a lot of feedback from friends and family but very little from professionals.

Twelve different users chatted with DroneBot. An interaction is defined as a question the user asked or a click on a block.

User       Interactions   Obscenities/Non Sequiturs   Adjusted for Obscenities/Non Sequiturs
User 1     9              0                           9
User 2     40             17                          23
User 3     10             0                           10
User 4     11             7                           4
User 5     20             7                           13
User 6     8              0                           8
User 7     4              0                           4
User 8     15             0                           15
User 9     12             0                           12
User 10    16             3                           13
User 11    28             12                          16
User 12    10             0                           10
Average    15             4                           11

The adjusted column subtracts obscenities/non sequiturs from total interactions; averages are rounded to the nearest whole number.
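
The averages can be reproduced directly from the per-user counts in the table; a quick check:

```python
# Reproduce the table's averages from the per-user counts.
interactions = [9, 40, 10, 11, 20, 8, 4, 15, 12, 16, 28, 10]
obscenities  = [0, 17,  0,  7,  7, 0, 0,  0,  0,  3, 12,  0]
adjusted = [i - o for i, o in zip(interactions, obscenities)]

n = len(interactions)
print(round(sum(interactions) / n))  # 15  (183 / 12 = 15.25)
print(round(sum(obscenities) / n))   # 4   (46 / 12 ≈ 3.83)
print(round(sum(adjusted) / n))      # 11  (137 / 12 ≈ 11.42)
```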


The questions users asked varied widely; there was no common thread between them.

Analysis

I think the number one reason for the lack of engagement with the chatbot is that the topic is too niche. Drone law, as stated in the beginning, is new and unknown to most. Initially, I thought people would ask a lot of questions precisely because they didn't know anything about the topic. Instead, I learned the opposite: people didn't ask questions because they didn't know enough about the topic to know where to start. A common response I heard was "I wasn't sure what else to ask."

Another reason for the small number of interactions is that users were deterred from asking further questions after receiving the message that the bot didn't understand. They may have thought they had exhausted all the questions they could ask, or they simply didn't know what else to ask. This ties directly back to the first reason: if people don't know what questions to ask in the first place, and a couple of their questions get no response, they may be discouraged from continuing.

I also noticed that it was common for people to test what kind of response the bot would give to obscenities or non-sequiturs. While not everyone went down that route, those who did generally asked a lot of those questions.

One of the toughest problems I encountered was developing the keyword patterns the program recognized. I realized that everyone asks the same question in slightly different ways. Users would ask "where is the best place to fly" and "where can I fly", but only the question with the most similar keywords would get a response. Another example concerns when people should fly: "when can I fly", "what's the best time to fly", "when should I fly". This was the case for many questions. To account for it, I attempted to come up with every variation of a sentence I could think of. However, as is typical with these things, there is always a phrasing I didn't anticipate. Whenever the bot was stumped, I updated it with the new variation; this quickly became my most common maintenance task.
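
One way to picture this maintenance loop: each answer keeps a growing list of trigger phrasings, and fixing a stumped question just means appending the phrasing that failed. A hedged sketch, using the phrasings quoted above plus invented answers:

```python
from typing import Optional

# Many phrasings, one answer: each topic keeps a growing list of trigger
# phrases. The phrasings come from the examples above; the answers are
# invented for the sketch.
INTENTS = {
    "where_to_fly": {
        "triggers": ["where is the best place to fly", "where can i fly"],
        "answer": "Fly in open areas, away from airports and crowds.",
    },
    "when_to_fly": {
        "triggers": ["when can i fly", "what's the best time to fly",
                     "when should i fly"],
        "answer": "Fly during daylight; night flying is prohibited.",
    },
}

def match(message: str) -> Optional[str]:
    """Return the answer for the first topic whose trigger appears."""
    text = message.lower()
    for intent in INTENTS.values():
        if any(trigger in text for trigger in intent["triggers"]):
            return intent["answer"]
    return None  # stumped: note the phrasing and add a trigger for it

# When a new phrasing stumps the bot, maintenance is a one-line update:
INTENTS["when_to_fly"]["triggers"].append("good time to fly a drone")
```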

Since I knew people would run out of questions, I tried to prompt users with questions or direct them down certain avenues. This mostly failed: people didn't follow my direction, either because they had their own agenda of questions in mind or because they didn't read the entire response. Most users failed to ask the follow-up questions I provided. I did have some success, though: the users who read the entire response and followed DroneBot's directions asked, on average, four to six more questions.

Conclusions

Chatbots take a lot of trial and error. Since there are so many ways to ask the same thing, there is a lot of room for miscommunication. A chatbot is only successful if it can answer any question or guide its users in a certain direction. A lot of data is necessary to do that, and the data comes from field tests like this one.

Also, Messenger bots that deal with niche subjects aren't ideal. People don't know what to ask or how to interact with them, and the result is little user engagement.

With all this being said, I believe my bot was successful. I conducted a field test, acquired data and adjusted my bot’s answers as users asked new questions. With over 80 responses prepared, users who thoroughly engage with DroneBot will come away with enough of an understanding of drone law to avoid most legal issues when flying.