Was Peter Rabbit stealing?

The Tale of Peter Rabbit is a children’s book that follows a mischievous young bunny as he sneaks into Mr. McGregor’s garden and eats as many vegetables as he can before being chased off by the farmer. While we were discussing the story, my young son raised a very good question.

“Was Peter Rabbit stealing?”

I thought about the question for a minute from two different contexts. The first was the context of the actual story, in which Peter Rabbit is anthropomorphized to embody human traits and emotions. Having been humanized, the character Peter Rabbit possesses self-awareness. His mother even scolds him at the end of the story for losing his jacket and his shoes for the second time in two weeks, implying that all of the animals in the story are conscious of their actions on a human level. From this context I would answer the question, “Yes, Peter Rabbit was stealing.”

However, I chose to answer his question from the second context, the one in which Peter Rabbit is not anthropomorphized: he’s just a regular rabbit, and therefore representative of rabbits in general. My son and I are thinking, reasoning beings; knowing his developmental thought process at his age, I realized he would most likely project my answer onto all furry little creatures, which is why I answered from that point of view.

To summarize, I said something to this effect: “No, he was not stealing. He was simply doing what rabbits do. You see, it’s people who put up fences to draw lines around their property.” We happened to be driving in the car at the time, so it was easy to demonstrate my point by directing his attention out the window. “Look,” I said as we drove. “That property is that person’s.” Then we passed a fence. “That property is that person’s.” We passed another fence. “And that property is that person’s.” And another fence.

“Now each of those people might plant a garden in their backyard with vegetables growing in it. The thing is, the rabbits don’t know whose property is whose. The rabbits are just being rabbits. They’re hungry, so they go out and search for food. They find food in the garden and they eat it. When they get chased by a human they get scared and run away. This is not stealing, because the rabbit doesn’t know that the food doesn’t belong to him. The people know this, but they’re the ones who created that environment. Think about a squirrel grabbing a nut from the same yard. The person doesn’t chase the squirrel away for stealing the nut, even though the squirrel grabbed it a foot away from where the bunny ate the vegetable. No, I don’t think the rabbit was stealing. The rabbit was just doing what rabbits do.”

So what is the point of all this?  I like to apply these types of thought exercises when designing and building The Intent Engine.  For me there are a few things to think about regarding AI/ML/NLP from the perspective of the question being asked about the Peter Rabbit story.

  • Consciousness and Anthropomorphism
    • Animals possess intelligence to differing degrees.  Human intelligence is marked by higher levels of cognitive functions and self-awareness.
    • Anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities.
    • This article about “The Human vs. Animal Consciousness Debate” is an interesting read.
    • Taking the preceding points into account when building The Intent Engine, I don’t think it’s useful to attempt to significantly anthropomorphize the AI. I do think it’s useful to apply human traits to the AI for the purpose of improving the user experience. In other words, I think the AI should begin as a personal executive assistant that makes your life more efficient and grow from that use case. It should do all the mundane tasks for you and be there when you need it to search, document, and recall information more efficiently than the input/output methods in use today. Of course, it’s not limited to only those functions; it should absolutely expand from there. We do, however, need focus as we begin. Personalized, customizable AI for each of us, built upon a collaborative foundation, is where I want to start. I fully accept anthropomorphization techniques that improve the user experience, such as human-sounding voices instead of robotic ones and other similar enhancements. On the other hand, I don’t think we should be attempting to create AI that looks, feels, and acts like a human. Maybe there will be a good reason to do that at some point, but as of now I don’t quite see the utility in that approach; it just seems like we’re trying to trick each other. Why? For me, the purpose of building AI is to augment us humans in ways that reduce the friction of interacting with technology.
  • Developmental Levels of Learning
    • Children’s brains change shape and size in response to just about every stimulus encountered in the early years of life. Environments, experiences, and interactions all affect the way a child’s brain gets wired. That’s not an overnight process; it takes years. It’s what we call our formative years. I think AI should follow the same model. It should start out like a baby, without a clue about the world around it, as it slowly begins to process input and state. I don’t think we should rush to train models while tweaking features for higher accuracy scores without simultaneously keeping the bigger picture in mind. I don’t think AI should or will become self-aware, but it should and could still learn and adapt. As I mentioned earlier, I think AI’s capacity to augment humans will be its best implementation. Just as a parent teaches a child in the vast world in which we all live, each one of us should have our own AI to train and to teach: all of these individual AI environments living on top of the same common AI planet, a planet I call The Intent Engine. Each AI should be personalized for you and by you, and it should take the time to learn both you and the world in which you and it are functioning. At the same time, the AI should leverage the vast amount of community knowledge ready to be shared among all the other AIs within The Intent Engine. Of course, you have full control over the data that gets shared, if any, with the community; a minimal sketch of that kind of opt-in sharing appears after this list. After all, it’s your data. Shouldn’t you be in full control of it?
  • Context, Intent and Dialogue
    • This is the hard part of teaching the AI. The question “Was Peter Rabbit stealing?” seems like a simple one, but there’s a lot to think about when formulating an answer. At the moment, most AI assistants function like search engines: they don’t necessarily find the best answer, but they weed out the worst answers and respond with something satisfactory. I think this approach needs a major overhaul to catch up with contemporary user requirements. Again, instead of just responding to questions or searches, let’s reduce the friction involved in interacting with technology.
    • We need to start by creating better dialogue management. We need interactive AI that gets us past the niche-focused question-and-answer bots that redirect you to a representative the moment you wander off script. That’s very hard to do, though, because so much of what someone could ask the bot could be out of scope. That’s why I think it should take a long time to train your AI; it won’t understand much of anything at first. The process can be accelerated by leveraging shared community modules and, especially, by improving dialogue management with better recall. Even so, learning how to learn can’t be rushed. (A sketch of this kind of out-of-scope handling follows this list.)
    • Lastly, the AI needs to better understand its audience and their intent. Is the AI conversing with a 20-year-old or a 70-year-old? It may need to respond differently depending on many factors, age being only one example. Being contextually adaptive is an essential feature of The Intent Engine.
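
To make the opt-in sharing idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical; names like SharePolicy and facts_to_share are illustrations I made up for this post, not actual pieces of The Intent Engine. The point is simply that a personal AI is private by default and contributes to the community pool only the topics its owner explicitly whitelists.

```python
from dataclasses import dataclass, field

@dataclass
class SharePolicy:
    """Per-user rules for what, if anything, leaves the personal AI."""
    share_nothing: bool = True                       # private by default
    shared_topics: set[str] = field(default_factory=set)

@dataclass
class LearnedFact:
    topic: str
    content: str

def facts_to_share(facts: list[LearnedFact], policy: SharePolicy) -> list[LearnedFact]:
    """Return only the facts the owner has explicitly opted in to sharing."""
    if policy.share_nothing:
        return []
    return [f for f in facts if f.topic in policy.shared_topics]

# A user who shares only gardening knowledge with the community pool.
policy = SharePolicy(share_nothing=False, shared_topics={"gardening"})
facts = [
    LearnedFact("gardening", "Rabbits will eat any unfenced lettuce."),
    LearnedFact("calendar", "Dentist appointment on Tuesday."),  # stays private
]
print(facts_to_share(facts, policy))  # only the gardening fact is shared
```

The design choice here is the default: nothing leaves your AI unless you say so, which keeps the “it’s your data” promise enforceable in code rather than in a settings page nobody reads.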
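And to make the out-of-scope and audience-adaptation points concrete, here is a second minimal sketch, again with entirely hypothetical names. The keyword-based classify_intent is a stand-in for a real NLP model, and the threshold value is illustrative, not anything tuned for The Intent Engine: the dialogue manager answers only when its confidence clears the threshold, admits what it doesn’t know otherwise, and adjusts its phrasing to a simple user profile.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    age: int   # one of many signals the AI could adapt to

def classify_intent(utterance: str) -> dict[str, float]:
    """Stand-in for a real NLP model; keyword matching keeps the sketch runnable."""
    keywords = {"schedule": "calendar", "remind": "reminder", "find": "search"}
    scores = {intent: 0.1 for intent in keywords.values()}
    for word, intent in keywords.items():
        if word in utterance.lower():
            scores[intent] = 0.9
    return scores

OOS_THRESHOLD = 0.5   # below this, admit the request is out of scope

def respond(utterance: str, user: UserProfile) -> str:
    scores = classify_intent(utterance)
    intent, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < OOS_THRESHOLD:
        # Out of scope: say so and learn, rather than bluffing an answer.
        return "I don't know how to help with that yet, but I can learn."
    # Contextual adaptation: the same intent can be phrased differently
    # depending on who is asking (age is just one example signal).
    style = "short and casual" if user.age < 30 else "step by step"
    return f"Handling '{intent}' for {user.name}, {style}."

print(respond("Remind me to water the garden", UserProfile("Alex", 24)))
print(respond("Was Peter Rabbit stealing?", UserProfile("Sam", 70)))
```

Admitting ignorance and learning from it fits the slow, formative-years approach described above far better than weeding out the worst answers and bluffing something satisfactory.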
