I keep the number of mailing lists I monitor pretty small, and generally when I add one, I remove another. Lately one that I’ve got on my list and have been paying attention to is the PUG-IP (Python User Group In Princeton)…and lately there has been a small thread going on about what is object oriented programming (OOP) and how can a beginner understand and actually use it.
I find this a pretty interesting question as I remember struggling with this many moons ago as well (when I was first digging into Java actually)…and even more interesting to me is that, even after all these years of ‘using’ OOP, most of the gurus on the list agree that it’s a complex topic to explain to beginners.
I agree, and I think it’s because of two main reasons:
1. OOP involves a heavy dose of theory…and truly understanding theory generally involves a heavy dose of experience. That is, you’ve just got to use it, and you’ve got to break it, to understand it. That makes it very hard to learn.
2. The dirty little secret is that I’ve found in practice most OOP that people write is really just procedural programming in disguise, and the talented procedural programmer can have a pretty full and happy career without ever really understanding (or fully using) OOP. That is, OOP involves a lot of hype.
In fact I will freely admit that my own understanding and use of OOP is pretty basic (so please do take everything I say around the topic with a grain of salt).
Luckily being that this is the internet, and my own blog, I don’t actually have to be good at something to try and explain it to others…so without further ado, here’s my own quick and dirty explanation of OOP for beginners.
OOP is a style of programming in which you model your code on the way we describe the real world.
The 3 steps in OOP
1. Define an object
Classes are really just definitions of an object. In plain English it would go something like this:
“Do you know what a glyphon is? No? It’s a round thing that bounces.”
In code we do the same thing when we define a class. We name the class, and we set up functions (or methods) that explain the details of the class. If you look at the group of methods within a given class as a whole, you get a sense of the definition of the class (i.e. what an object of this class can do).
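In Python, for instance, step 1 could look like this minimal sketch (the Glyphon class, its size attribute, and its bounce method are all just made up for the example):

```python
# A sketch of step 1: Glyphon is the hypothetical "round thing that
# bounces" from the example above -- this class is just its definition.
class Glyphon:
    """A round thing that bounces."""

    def __init__(self, size=1):
        # Attributes describe what a glyphon *is*...
        self.size = size

    def bounce(self):
        # ...and methods describe what a glyphon can *do*.
        return "boing!"
```

Note that nothing happens yet when you run this…which brings us to step 2.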
2. Create an object
I think this is a step that many beginners don’t really get at first. But once you get the idea that classes are just definitions of what an object is, it starts to make sense that defining something doesn’t actually mean the ‘something’ exists. You’ve got to create an instance of that thing before you can actually use it.
In the real world it’s the difference between talking about something and actually having that something in your hands or in the physical world. In code, it’s the bit where you generally see a statement calling the class’s constructor (new in some languages, __init__ behind the scenes in Python) like:
my_glyphon = Glyphon()
3. Use an object
So now that you’ve defined what a Glyphon is, and you’ve actually got one in your hands, you can finally start to ‘do’ stuff with it…and that’s where your actual ‘program’ does stuff.
In the real world it’s the part where you bounce the glyphon on the floor or throw the glyphon to your friend. In code, it’s the part where you implement your program’s specific business logic.
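Putting steps 2 and 3 together in Python (again using a made-up, stripped-down Glyphon class):

```python
# A hypothetical Glyphon class, as sketched in step 1 above.
class Glyphon:
    def bounce(self):
        return "boing!"

# Step 2: create an object -- the definition alone does nothing until
# we make an actual instance of it.
my_glyphon = Glyphon()

# Step 3: use the object -- this is where your program finally *does* stuff.
print(my_glyphon.bounce())  # the code equivalent of bouncing it on the floor
```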
This is another spot where I think many beginners get depressed. It sure seems like a lot of work to have to define an object, then create the object, just so you can FINALLY start to use the object…and the sad truth is that, yes, in many cases it *is* a lot of work. And that’s also why in many cases, especially in small one-off programs (or poorly designed large programs), OOP doesn’t actually make a lot of sense.
So why use OOP at all? Well, there actually are a few advantages OOP can give you (when done correctly and used in the proper situations). Let’s take a quick look at what some of those advantages are.
The advantages to OOP (in theory)
1. Abstraction and Encapsulation.
OOP allows you to define objects. Once you define something, you can use that definition in as many places as you want, as often as you want, wherever it makes sense for your programs.
The true power here lies in the fact that ‘you’ don’t have to be the one that actually defines an object to be able to use it in your own programs. In fact, you don’t even have to really understand that much about the details of an object to be able to use it.
In the real world, you don’t have to know how a computer works to be able to use one. You just need to know that a keyboard and a mouse help you control what you see on a screen. In code, you just need to know some basics about a class (what methods you can call, what parameters are needed, and what those methods will return) and you can use it.
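For a concrete example, Python’s built-in collections.Counter is a class most of us use entirely through its public interface, without ever reading a line of its implementation:

```python
from collections import Counter

# We don't need to know how Counter stores or tallies things internally;
# we only need its interface: pass it an iterable, call most_common().
tally = Counter("banana")
print(tally.most_common(1))  # [('a', 3)]
```

That’s encapsulation working in our favor: the details are hidden behind a small, documented set of methods.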
2. Inheritance
Most fans of OOP will tout inheritance as the true reason for loving OOP…and honestly I think it *is* pretty awesome (at least in theory). In basic terms, all that inheritance means is that you can use the definition of one class to help define another class.
In the real world we use the definition of one thing to explain another all the time…“Do you know what a gazbot is? No? It’s like a glyphon but it doesn’t bounce”.
In code, there are many language specific differences here, but the basic concept is the same…without having to duplicate (much) code we can define a class as ‘like’ another class, but with a specific set of differences (the methods we define in our new class define those differences).
So in our example, when we define the gazbot class, we can say: give us everything from the glyphon class, but we’ll give a new definition for the bounce method (which in our case does nothing, as gazbots don’t bounce).
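In Python, the glyphon/gazbot example might be sketched like this (both classes and their bounce behavior are, of course, hypothetical):

```python
# A sketch of the inheritance example above.
class Glyphon:
    """A round thing that bounces."""
    def bounce(self):
        return "boing!"

class Gazbot(Glyphon):
    """Like a glyphon, but it doesn't bounce."""
    def bounce(self):
        # Override just this one method; everything else is inherited as-is.
        return None

print(Glyphon().bounce())  # boing!
print(Gazbot().bounce())   # None
```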
This turns out to be really useful as it can drastically cut down on the amount of code needed in large projects and (when done well) can make the purpose of each object and program very clear and simple.
3. Polymorphism
Of the three advantages often listed with OOP, this is by far the most complex to ‘understand’ and I also think the least actually used (or at least, the least used well). But that only makes sense, because generally if something is hard to say, it’s probably hard to understand and use too.
Anyway - on a very basic level, all that polymorphism means is that you can define something in different ways depending on the parameters it’s given.
This is complex in the real world too, but it does exist, especially in something as complex as the English language. There are many words that have multiple meanings in the English language, and it’s only through context that we determine which definition to apply.
In code, it’s actually a little easier than trying to understand English. You simply have methods of the same name, that accept different parameters…then when you call the method, the parameters you pass in determine which definition will actually be applied.
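One wrinkle: not every language does this the same way. Python, for instance, doesn’t pick between methods by signature the way Java or C++ do, but the standard library’s functools.singledispatch decorator gives a taste of the same idea; the type of the argument you pass in decides which definition runs (the describe function here is just made up for illustration):

```python
from functools import singledispatch

# The argument's type selects which definition of describe() is applied.
@singledispatch
def describe(thing):
    return "something mysterious"

@describe.register(int)
def _(thing):
    return f"the number {thing}"

@describe.register(str)
def _(thing):
    return f"the word '{thing}'"

print(describe(7))          # the number 7
print(describe("glyphon"))  # the word 'glyphon'
print(describe(3.5))        # something mysterious
```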
So that’s it in a nutshell.
Again, I’ve only touched on the very basics of what OOP is and how it’s used (and I’m not a guru by any means). Each of the things I mentioned above is really worthy of a much more involved discussion and can probably only be truly understood through real experience. So I encourage you to explore the web for more/better details, and even more importantly I encourage you to just start playing with OOP code. The more you do, the more you’ll start to ‘get it’.
In the meantime, I do hope this has helped at least one or two others out there…and if you’ve got corrections, updates, or questions around any of this please do drop me a note in the comments below!
Kevin has a day job as CTO of Veritonic and is spending nights & weekends hacking on Share Game Tape. You can also check out some of his open source code on GitHub or connect with him on Twitter @falicon or via email at kevin at falicon.com.
If you have comments, thoughts, or want to respond to something you see here I would encourage you to respond via a post on your own blog (and then let me know about the link via one of the routes mentioned above).