First, Sentinel is not a load balancer or a proxy for Redis.
Second, not every failure is the death of a host. Sometimes a server hangs briefly, sometimes a network cable gets unplugged, etc. Because of this, it is not good practice to run Sentinel on the same hosts as your Redis instances. If you're using Sentinel to manage failover, anything fewer than three Sentinels running on nodes other than your Redis master and slave(s) is asking for trouble.
Sentinel uses a quorum mechanism to vote on whether to trigger a failover and which slave to promote (the quorum is the number you set in the sentinel monitor directive). With fewer than three Sentinels you run the risk of split brain, where two or more Redis servers think they are the master.
Imagine the scenario where you run two servers, with a Sentinel on each. If you lose one, you lose reliable failover capability.
Clients only connect to Sentinel to learn the current master's connection information. Any time a client loses connectivity, it repeats this process. Sentinel is not a proxy for Redis - commands for Redis go directly to Redis.
The only legitimate reason to run Sentinel with fewer than three Sentinels is for service discovery, which means not using it for failover management.
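For illustration, here is a minimal service-discovery sketch using the Python redis-py client. The master name "mymaster" and the Sentinel hostnames are assumptions - substitute your own:

```python
# Minimal service-discovery sketch with redis-py: ask the Sentinels
# which host currently holds the master role, nothing more.
from redis.sentinel import Sentinel

# Assumed Sentinel addresses and master name - replace with yours.
sentinel = Sentinel(
    [("sentinel-1", 26379), ("sentinel-2", 26379), ("sentinel-3", 26379)],
    socket_timeout=0.5,
)

master_host, master_port = sentinel.discover_master("mymaster")
print(f"current master: {master_host}:{master_port}")
```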
Consider the two-host scenario:
Host A: redis master + sentinel 1 (Quorum 1)
Host B: redis slave + sentinel 2 (Quorum 1)
If Host B temporarily loses network connectivity to Host A in this scenario, Sentinel 2 will meet its quorum of one, declare the master down, and promote the slave on Host B to master. Now you have:
Host A: redis master + sentinel 1 (Quorum 1)
Host B: redis master + sentinel 2 (Quorum 1)
Any clients which connect to Sentinel 2 will be told Host B is the master, whereas clients which connect to Sentinel 1 will be told Host A is the master (which, if you have your Sentinels behind a load balancer, means half of your clients get each answer).
Thus, what you need to run to obtain minimally acceptable reliable failover management is:
Host A: Redis master
Host B: Redis slave
Host C: Sentinel 1
Host D: Sentinel 2
Host E: Sentinel 3
Your clients connect to the Sentinels and obtain the current master for the Redis instance (by name), then connect to it. If the master dies, the connection should be dropped by the client, whereupon the client will (or at least should) connect to Sentinel again and get the new information.
How well each client library handles this depends on the library.
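As an example, redis-py wraps this discover-connect-retry loop for you. A sketch, assuming a master named "mymaster" and Sentinels on hosts C, D, and E as above (hostnames are placeholders):

```python
# Client-side pattern with redis-py: the Sentinel-backed connection
# pool re-queries the Sentinels for the current master whenever the
# connection drops.
from redis.sentinel import Sentinel

# Assumed addresses for the Sentinels on hosts C, D, and E.
sentinel = Sentinel(
    [("host-c", 26379), ("host-d", 26379), ("host-e", 26379)],
    socket_timeout=0.5,
)

# master_for() returns a client whose pool asks Sentinel for the
# current master address on (re)connect, so a failover shows up as a
# dropped connection followed by a connection to the new master.
master = sentinel.master_for("mymaster", socket_timeout=0.5)
master.set("key", "value")
print(master.get("key"))
```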
Ideally, Hosts C, D, and E are either the same hosts you connect to Redis from (i.e. the client hosts) or represent a good sampling of them. The main thrust here is to ensure you are checking reachability from wherever you need to connect to Redis from. Failing that, place them in the same DC/rack/region as the clients.
If you want your clients to talk to a load balancer, try to run your Sentinels on the LB nodes if possible, adding non-LB hosts as needed to reach an odd number of Sentinels greater than two. The exception is when your client hosts are dynamic, meaning their count fluctuates (they scale up for traffic and down for slow periods, for example). In that scenario you pretty much have to run your Sentinels on hosts that are neither clients nor Redis servers.
Note that if you do this you will then need to write a daemon which monitors the Sentinel PUBSUB channel for the master-switch event (+switch-master) and updates the LB, which you must configure to talk only to the current master (never try to talk to both). It is more work to do that, but it does make the use of Sentinel transparent to the client - which only knows to talk to the LB IP/port.
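A sketch of such a daemon, again with redis-py. The +switch-master channel and its payload format come from Sentinel itself; the update_load_balancer helper is hypothetical and stands in for whatever actually reconfigures your LB:

```python
# Sketch of a daemon that watches one Sentinel's pub/sub feed for the
# +switch-master event and repoints the load balancer. A production
# version should subscribe to every Sentinel, not just one.
import redis

def update_load_balancer(host: str, port: str) -> None:
    """Hypothetical hook: reconfigure the LB to send traffic only to host:port."""
    print(f"repointing LB to {host}:{port}")

# Assumed Sentinel address and master name - replace with yours.
sentinel_conn = redis.Redis(host="sentinel-1", port=26379)
pubsub = sentinel_conn.pubsub()
pubsub.subscribe("+switch-master")

for message in pubsub.listen():
    if message["type"] != "message":
        continue  # skip the subscribe confirmation
    # Payload format: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
    name, _old_ip, _old_port, new_ip, new_port = message["data"].decode().split()
    if name == "mymaster":
        update_load_balancer(new_ip, new_port)
```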