Know the answer and are only reading this out of some sort of strange social obligation, as surely at least 85% of my audience is?
Well, then, read on, because I'm going to discuss the operation right now!
To understand why division by zero really screws things up, we'll need a quick review of some lessons you might have gone over in your algebra classes.
First of all, what's division? Division is a basic mathematical operation. Its primary purpose is to find a number that represents how many items end up in each group when a larger "pool" is split evenly into a certain number of groups. For example, if we have 8 bags of candy and four people to give them out to, we end up with 2 bags of candy per person; 8 divided by 4 is 2. (Sometimes we like to shorten this and just write "8/4=2" for simplicity.) Because of the way these factor families work, we can also figure out how many people we can give 2 bags of candy each if we have 8 bags -- this is 8 divided by 2, or 4.
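To make that grouping picture concrete, here's a tiny sketch in Python (the helper name `bags_per_person` is made up for this example):

```python
# Division answers "how many items per group?" when a pool is split evenly.
def bags_per_person(total_bags, people):
    # Integer division is fine here because the pool splits evenly.
    return total_bags // people

print(bags_per_person(8, 4))  # 8/4 = 2 bags per person
print(bags_per_person(8, 2))  # 8/2 = 4: with 2 bags each, 8 bags serve 4 people
```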
This is due to the commutative property of multiplication, actually. It turns out that the process of division is the inverse operation of multiplication -- which we use to find, given a certain number of groups of something, how much of that thing we have. For example, if I have 3 goldfish each in 5 different fish tanks, I have 15 goldfish because 3 times 5 is 15 (3*5=15). When we say that multiplication is "commutative" (or that the operation "commutes") it means that the two numbers on the left hand side of our mathematical statement can be interchanged to give us the same result -- 3*5=15 and 5*3=15.
You might note that commutativity applies not only to multiplication but also to addition. This might not surprise you if you've realized that multiplication itself is really just adding in groups. So we can represent 3*5 as 3+3+3+3+3. Heck, we could also represent it as 5+5+5. This is a very useful fact to keep in mind, as it makes it easy to clean up messy algebra that we might end up working with in a more complicated setting. Commutativity shows up plainly here: 3+3+3+3+3 (five groups of 3) and 5+5+5 (three groups of 5) both give the same result, 15.
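The repeated-addition view of multiplication can be spelled out directly. A minimal sketch (the function name `times` is my own):

```python
def times(a, b):
    """Compute a*b by adding a to itself b times."""
    total = 0
    for _ in range(b):
        total += a  # one "group" of a per loop iteration
    return total

print(times(3, 5))  # 3+3+3+3+3 = 15
print(times(5, 3))  # 5+5+5 = 15 -- commutativity: same answer either way
```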
Thus, if multiplication tells us that having y objects in each of x groups gives us z objects total, then division reverses this: z divided by y gives us back x. In a symbolic representation, we'll describe this as such: x*y=z can be reversed as z/y=x. Note that since we can switch x and y, the other two symbolic statements in this "factor family" are y*x=z and z/x=y.
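The whole factor family can be checked in a few lines. A quick sketch, with x=3 and y=5 chosen arbitrarily:

```python
x, y = 3, 5
z = x * y          # x*y = z  ->  z is 15

# The three companion statements in the factor family:
assert y * x == z  # commutativity of multiplication
assert z / y == x  # division undoes multiplying by y
assert z / x == y  # division undoes multiplying by x
print("factor family holds for", x, y, z)
```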
So, what does this have to do with why division by 0 causes errors?
Well, let's have n represent this otherwise undefined result from a division by zero operation. Just to make it easier to follow we'll choose a specific number to divide by zero -- I like 7. Thus we're assuming that 7/0=n.
If we go back to our knowledge that division and multiplication are inverses, we find that the above statement implies that n*0=7. That means we're adding a number zero times (i.e., we're not adding anything at all!) and somehow ending up with 7 -- or, equivalently, that we've added nothing n times and ended up with 7. Going back to our conceptual representations, that's like me going to the well with my 1-gallon bucket, coming back with it empty, and somehow having brought 7 gallons of water to my village.
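This is also why programming languages refuse the operation rather than invent a value for it. A quick Python check (the candidate list for n is arbitrary):

```python
# No candidate n satisfies n*0 == 7, so there is nothing 7/0 could sensibly mean.
for n in [-3.5, 0, 1, 7, 100]:
    assert n * 0 != 7

# Python reflects this by raising an error instead of returning a number.
try:
    7 / 0
except ZeroDivisionError as e:
    print("refused:", e)
```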
That's why we won't try to shoehorn a definition for division by zero! It doesn't make physical sense! If we allow for division by zero in our modeling of physics, then we're allowing for things like the spontaneous creation of matter and instantaneous teleportation! Considering we don't really see this happening (definitely not on a macroscopic scale, anyway), it seems that treating the result of division by zero as "any old number" is conceptually worthless to us.
If we try to do algebra with it, our equations will also, in a manner of speaking, blow up in our faces -- more precisely, we'll lose some of the possible values our expressions can take. If we divide by x while solving an algebraic statement, we're implicitly assuming that x can't be zero. If x can be zero, we'll end up missing that solution, and possibly others. Consider (x-5)*x = 0. If we divide both sides by x, then x has to equal 5. But x could also be zero, so we miss that solution.
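A brute-force check over a small range makes the lost solution visible (the search window is an arbitrary choice for illustration):

```python
def f(x):
    return (x - 5) * x

# Every solution of (x-5)*x == 0 in a small search window:
solutions = [x for x in range(-10, 11) if f(x) == 0]
print(solutions)  # [0, 5] -- dividing both sides by x would have hidden x = 0
```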
But I know what someone out there is saying -- what if instead of 7, we use zero? Well, that's a good point -- and it IS a distinct concept. Then we have 0/0=n. If we go and rewrite this, then we have n*0=0. This is valid -- in fact, you'll note that n can be anything, and this will still be valid.
That's actually the problem. We can plug in anything for n and it'll work. Thus 0/0 doesn't give us a single number; it effectively gives us every number. This is bad for algebra. We don't call this "undefined"; since 0/0 could consistently be defined to be anything, we call it an indeterminate form. (So called because we can't determine a single number that 0/0 represents.)
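You can verify that every candidate passes the rearranged check, which is precisely why no single value can be singled out (the candidate list is arbitrary):

```python
# Rearranging 0/0 = n gives n*0 = 0, which every number satisfies.
for n in [0, 1, -2, 3.14, 10**6]:
    assert n * 0 == 0
print("every candidate n works -- 0/0 determines no single number")
```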
What this means is that from 0/0 you can pretty much get anything. If you're doing some mathematical test and you find a 0/0 as a result, that means that you can't conclude anything about the expression which you're testing, and will need to use another test.
So don't treat x/0 as a normal number!