There is no single optimal size for objects in the object store; in fact, this flexibility is one of the big benefits over fixed-size block stores. Typically an application will use this flexibility to decompose its data model along convenient boundaries. That said, there are some considerations to keep in mind if you are storing very small or very large objects.
Is there a problem with a large number of small objects?
There has never been a functional problem with small objects, although in the past storing them has been inefficient because of the way objects are laid out on disk. However, the next release of Ceph (Firefly) adds the option of using LevelDB as a backend, which makes small objects much more efficient.
How big can objects get without causing trouble?
Assuming that you are using replication in RADOS (as opposed to the proposed object striping feature and the erasure coding backend), an object is replicated in its entirety to a set of physical storage nodes. Thus, the size of an object is inherently limited by the storage capacity of the physical nodes to which it is replicated.
This mode of operation also implies a practical limitation: per-object I/O performance is bounded by the performance of the physical devices (data and journal drives) that hold the object. It is therefore often useful to think of an object as a unit of I/O parallelism, although in practice many objects will map to the same set of devices.
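For example, a client that wants high aggregate throughput can spread its writes across many objects and issue them asynchronously, so that the OSDs chosen by CRUSH for each object can service the requests in parallel. A minimal sketch using the Python librados bindings follows; the config path, pool name, object naming scheme, and object count are placeholder assumptions, not anything mandated by RADOS:

    import rados

    # Connect to the cluster; conffile path and pool name are placeholders.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')

    payload = b'x' * (4 * 1024 * 1024)  # 4 MB per object

    # Issue writes to many objects asynchronously; CRUSH places each
    # object independently, so different OSDs can handle them in parallel.
    completions = []
    for i in range(16):
        completions.append(ioctx.aio_write_full('chunk.%04d' % i, payload))

    # Block until every write has been acknowledged before tearing down.
    for c in completions:
        c.wait_for_complete()

    ioctx.close()
    cluster.shutdown()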
This question will likely have a different answer for the erasure-coded backend, and applications can always stripe large datasets across smaller objects.
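Higher-level interfaces such as RBD, CephFS, and libradosstriper already stripe data across objects for you. Purely as an illustration of the idea, a hand-rolled version on top of the Python librados bindings might look like the sketch below; the stripe size, pool name, and naming convention are arbitrary assumptions, and there is no error handling or metadata tracking of the total size:

    import rados

    STRIPE_SIZE = 4 * 1024 * 1024  # 4 MB per object; an arbitrary choice

    def write_striped(ioctx, name, data):
        """Split a large blob into fixed-size objects named <name>.<index>."""
        for offset in range(0, len(data), STRIPE_SIZE):
            obj = '%s.%08d' % (name, offset // STRIPE_SIZE)
            ioctx.write_full(obj, data[offset:offset + STRIPE_SIZE])

    def read_striped(ioctx, name, total_size):
        """Reassemble the blob by reading the stripe objects back in order."""
        chunks = []
        for offset in range(0, total_size, STRIPE_SIZE):
            obj = '%s.%08d' % (name, offset // STRIPE_SIZE)
            chunks.append(ioctx.read(obj, min(STRIPE_SIZE, total_size - offset)))
        return b''.join(chunks)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')   # pool name is a placeholder

    blob = b'\0' * (25 * 1024 * 1024)    # a 25 MB dataset becomes 7 objects
    write_striped(ioctx, 'mydataset', blob)
    assert read_striped(ioctx, 'mydataset', len(blob)) == blob

    ioctx.close()
    cluster.shutdown()

The point of the sketch is simply that no single object needs to grow with the dataset: the stripes stay small enough to place and replicate comfortably, and reads and writes to different stripes can proceed on different devices.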