"this seems to be only a difference in syntax style" is a common mistake.
The essential difference is found in the definition of "object oriented".
The language is "oriented" toward objects in that all communication occurs by way of messages sent between objects.
And what is an object? Everything.
What does this mean?
It means that everything that can receive a message is an object, everything that can send a message is an object, and everything that can be sent as a parameter is an object. So when a method is dispatched, the parameters are always objects.
In implementation terms, an object is either a literal value or a reference to an instance of a class, which means the parameter, as a single word on the stack, tells the receiver everything it needs to know.
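Python works the same way, and a small sketch makes the point concrete: every value a method receives is an object that knows its own class, so the parameter alone is enough.

```python
# In a dynamically typed, object-oriented language such as Python,
# every parameter is an object that carries its own class with it.
def describe(obj):
    # Ask the object itself what it is -- nothing was declared at the call site.
    return type(obj).__name__

print(describe(42))        # built-in integers are objects
print(describe("hello"))   # so are strings
print(describe([1, 2]))    # and collections
```

Note that `describe` never states, and never needs to state, what kind of thing it accepts.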
In C++, or Java, or many other "Object" languages, the word on the stack is also just some bits--it might be a primitive value, or it might be an address pointing at memory, but you cannot tell by looking at it. It is not an object; it is unknown bits. One must tell the compiler what kind of thing is on the stack (a primitive or a reference), and if it is a reference, one must also tell the compiler what 'type' defines the layout of the thing the reference points to. Whereas if the parameter were known to be an object, and the language were oriented toward objects, the bits themselves would identify the object; and if it is a reference, the object itself would know what 'type' it is, so it would be sufficient to send just 'the object' to the receiver.
This has consequences.
For example, when you put an actual object into a collection and take it back out, you do not have to know what 'type' of variable you are storing it into. That information is known to the object, not the variable.
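A quick Python illustration of that point: a collection can hold anything, and nothing about the variable the element comes back out into declares a type--each object reports its own class.

```python
# A heterogeneous collection: no 'type' is declared for the container,
# for its elements, or for the variable each element is taken out into.
box = [42, "text", 3.14]

kinds = []
for item in box:
    # The object, not the variable, knows what it is.
    kinds.append(type(item).__name__)

print(kinds)
```

The same loop works unchanged no matter what you later decide to put in `box`.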
'Object' languages must communicate both the address and the type every time they 'send a message'. Object-oriented languages only need to communicate in terms of objects, because the objects know their own type. When the receiver needs to know the type of object it received as a parameter, it can ask the received object--by sending a message to it.
There is no reason to tie oneself to the drudgery of redundantly specifying the type at both ends of every interaction if the objects already know this. An object can be sent from one place to another and to another and to another and only the final destination actually needs to know what 'type' it is.
Think about that. All the infrastructure that is just routing things from place to place can be shared. You can change your mind as to what 'type' you will use to implement an entity, and most of the code you've already written does not have to change. The entity still gets passed to the bottom of the call chain, and a result is still returned. So maybe you change the method at the top and bottom to work with the new type--but nothing else has to change. One can often formulate an entire solution, and write 90% of the code, and test the majority of it, before finally deciding on the 'type' of the main object.
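The routing idea above can be sketched in Python; the function names (`top`, `middle`, `bottom`) are hypothetical, chosen only to stand for the layers of a call chain.

```python
# Hypothetical sketch: the middle layer is pure routing and never
# inspects the entity, so it is shared across any choice of type.
def top(entity):
    return middle(entity)

def middle(entity):
    return bottom(entity)   # just passes the object along

def bottom(entity):
    # Only the final destination interrogates the object.
    return "handled a " + type(entity).__name__

# Change your mind about the entity's type; the routing code is untouched:
print(top(7))
print(top({"id": 7}))
```

Swapping an `int` for a `dict` required editing nothing between the two endpoints, which is the point being made about the call chain.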
So no. It is not just a difference in syntax.
`+` expressions are messages sent to number instances (not primitives, but objects) with the second number as the argument. Smalltalk's syntax is simply "object message", with the result always being an object (so you can send another message to the result, etc.). Some symbols have special meaning; see wiki.c2.com/?SmalltalkSyntaxInaPostcard :-) – Alastair
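Python exposes the same idea under its operator syntax: `a + b` is dispatched as a message (`__add__`) to the left-hand object, with the right-hand object as the argument, and the result is itself an object that can receive further messages.

```python
# `3 + 4` is equivalent to sending the __add__ message to the
# int object 3 with the object 4 as the argument.
result = (3).__add__(4)
print(result)

# The result is an object too, so another message can be sent to it:
print(result.__add__(1))
```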