Even though the referenced article seems to have been posted close to when the question was asked (~2013), it reads as extremely outdated even for that time.
> When a disk is formatted, tracks are defined
This is only true once in a modern HDD's lifetime (i.e. any drive made in roughly the last 30 years): when it's formatted at the factory and all its tracks and sectors are defined. What regular people know as formatting, done via Windows or any other OS, is just a logical reorganization at the partition and file system level. This kind of formatting does NOT redefine/rewrite the physical tracks and sectors on the hardware. Such low-level operations are normally no longer possible (or needed) on modern drives once they leave the factory.
> A block, on the other hand, is a group of sectors that the operating system can address (point to)
Correct. All generally available modern storage devices use a fixed-block architecture: they expose their capacity as a huge number of small blocks, each with a unique id but all of the same size. For historical reasons that size was 512 bytes/block for a long time, but since about 2010 HDDs have transitioned to a larger size of 4096 bytes/block. These devices share another key property: any read or write against a block is always done against the block as a whole. The operation is never done on just part of a block.
When an OS talks to a storage device, the device understands the concept of a block and nothing else. The scheme used to address blocks is called LBA (Logical Block Addressing) and it's essentially very simple: a numeric id is assigned to each block, starting from 0 and ending at (number of available blocks − 1). Any read/write operation submitted to the storage device must specify the id(s) of the block(s) involved, in addition to whatever data is transferred.
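To make the addressing scheme concrete, here's a minimal sketch in Python. The device, its capacity, and the helper names are all hypothetical; the point is just the LBA-to-byte-offset arithmetic and the "whole blocks only" rule described above.

```python
# Minimal sketch of LBA addressing for a hypothetical fixed-block device.
# BLOCK_SIZE and TOTAL_BLOCKS are illustrative values, not a real drive's.

BLOCK_SIZE = 512          # bytes per block (512n device assumed)
TOTAL_BLOCKS = 1_000_000  # capacity of this hypothetical device, in blocks

def byte_offset(lba: int) -> int:
    """Translate a block id (LBA) into a byte offset on the device."""
    if not 0 <= lba < TOTAL_BLOCKS:
        raise ValueError(f"LBA {lba} out of range 0..{TOTAL_BLOCKS - 1}")
    return lba * BLOCK_SIZE

def lba_range_for(offset: int, length: int) -> range:
    """Which whole blocks a transfer of `length` bytes at `offset` touches.
    Reads/writes always involve whole blocks, never part of one."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    return range(first, last + 1)

print(byte_offset(3))               # 1536: block 3 starts at byte 3 * 512
print(list(lba_range_for(510, 4)))  # [0, 1]: a 4-byte span crossing a block boundary
```

Note how even a 4-byte transfer straddling a block boundary forces the device to handle two full blocks.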
With modern HDDs, practically all internal details about how blocks map to physical structures are hidden from the outside. Inside the HDD, a block does indeed map to at least one sector, but you need to check the HDD's specs to find out whether it's just one or more. For example, 512e HDDs released after 2010 internally use 4096 bytes/sector but present themselves as if they used 512 bytes/sector. These were necessary for legacy computers that couldn't be upgraded to handle 4K-native HDDs for various exotic reasons.
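The cost of that 512e emulation can be sketched as follows. When the host writes a single 512-byte logical sector, the drive must read the enclosing 4096-byte physical sector, patch 512 bytes inside it, and write the whole physical sector back (a read-modify-write). The names and structure below are purely illustrative, not real firmware:

```python
# Sketch of a 512e drive's internal read-modify-write, assuming 8 logical
# 512-byte sectors per 4096-byte physical sector. Illustrative only.

LOGICAL = 512
PHYSICAL = 4096
RATIO = PHYSICAL // LOGICAL  # 8 logical sectors per physical sector

def write_logical(media: bytearray, lba: int, data: bytes) -> None:
    """Emulate writing one 512-byte logical sector onto 4K physical media."""
    assert len(data) == LOGICAL
    phys = lba // RATIO                                 # enclosing physical sector
    start = phys * PHYSICAL
    sector = bytearray(media[start:start + PHYSICAL])   # read the whole 4K sector
    off = (lba % RATIO) * LOGICAL
    sector[off:off + LOGICAL] = data                    # modify 512 bytes of it
    media[start:start + PHYSICAL] = sector              # write the whole 4K back

media = bytearray(2 * PHYSICAL)              # tiny fake platter: 2 physical sectors
write_logical(media, 9, b"\xff" * LOGICAL)   # logical sector 9 lands in physical sector 1
print(media[PHYSICAL + LOGICAL] == 0xFF)     # True: byte inside the patched region
```

This read-modify-write penalty is also why partitions misaligned to the 4K physical boundary perform poorly on 512e drives.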
So, the [48-bit] LBA scheme allows directly addressing 2^48 blocks, which is a huge number. With a block size of 512 bytes, LBA can address devices with a max capacity of 2^48 × 512 bytes = 128 PiB (often loosely called just PB) = 131072 TiB (often loosely called just TB)! With 4096 bytes/block that capacity grows eightfold!
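The arithmetic above is easy to verify:

```python
# Verifying the 48-bit LBA capacity figures from the text.

MAX_BLOCKS = 2 ** 48

cap_512 = MAX_BLOCKS * 512   # max bytes addressable with 512-byte blocks
cap_4k = MAX_BLOCKS * 4096   # max bytes addressable with 4096-byte blocks

PiB = 2 ** 50
TiB = 2 ** 40

print(cap_512 // PiB)      # 128     -> 128 PiB
print(cap_512 // TiB)      # 131072  -> 131072 TiB
print(cap_4k // cap_512)   # 8       -> the eightfold increase
```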
> So why are there blocks. Why doesn't the operating system just point straight to the sectors? Because there are limits to the number of blocks, or drive addresses, that an operating system can address.
Everything said at this point (and afterwards) in the article is outdated by 40+ years. Yes, eons ago the OS needed to manage the relation between block size and sector size, to compensate for the limitations described there, but this is no longer the case because, as already said:
- Modern HDDs don't allow anyone to look inside and see/set what happens behind each block, and
- LBA is very generous and allows directly addressing a gigantic number of blocks, far beyond what's going to be found inside any generally available HDD anytime soon.
However, those early days had consequences for how file systems are built. The file system presents a layer of abstraction to users: it has its own allocation unit (i.e. a minimum "block" that must be used when you want to store anything), whose size can differ from the size of the blocks used by the storage devices backing the file system behind the scenes. In the case of NTFS, the allocation unit is known as the cluster size.
In general, using an allocation unit different from the default (which in turn may or may NOT be identical to the block size of the underlying storage device(s)) is something to do only when you know exactly why. For example, a 4K allocation unit (cluster size) may be beneficial even on HDDs presenting 512 bytes/block under certain usage patterns but not others.
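One practical consequence of the allocation unit is easy to show: every file consumes whole clusters, so its on-disk footprint is its size rounded up to the next cluster boundary. The figures below are plain arithmetic, not NTFS internals:

```python
# On-disk footprint of a file = file size rounded up to whole clusters.
# Cluster sizes below are common choices, not tied to any specific volume.

def on_disk_size(file_size: int, cluster_size: int) -> int:
    """Bytes actually consumed on disk by a file of `file_size` bytes."""
    clusters = -(-file_size // cluster_size)   # ceiling division
    return clusters * cluster_size

for cluster in (512, 4096, 65536):
    # A 1-byte file and a ~10 KB file, per cluster size
    print(cluster, on_disk_size(1, cluster), on_disk_size(10_000, cluster))
```

With a 64 KiB cluster, even a 1-byte file occupies 65536 bytes on disk, which is why large clusters suit volumes holding few large files and small clusters suit many tiny ones.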