What are the disadvantages of bit banging SPI/I2C in embedded applications

I have come to understand that bit-banging SPI/I2C over GPIO is considered horrible practice. Why is that?

Warford answered 26/12, 2013 at 19:48 Comment(1)
It seems like CPU responsiveness and resources are the main tradeoffs; I am wondering whether there are any signal-transmission-quality disadvantages to bit-banging. For example, if I am pushing the signal distance limit of I2C/SPI, will a bit-banged implementation perform differently than a typical hardware peripheral? – Shoop

Bit-banging carries a software overhead, consuming CPU cycles that you could otherwise utilise for other purposes. This may have a noticeable effect on system responsiveness to other events, and in a hard real-time system it may significantly impact the system's ability to meet real-time deadlines.
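To make that cost concrete, here is a minimal sketch of a mode-0, MSB-first bit-banged SPI byte transfer. The gpio_write/gpio_read/delay_half_bit helpers and the pin names are hypothetical placeholders for whatever your platform provides; the point is that the CPU busy-waits through both edges of every bit.

```c
#include <stdint.h>

/* Hypothetical GPIO helpers -- substitute your platform's register accesses. */
extern void gpio_write(int pin, int level);
extern int  gpio_read(int pin);
extern void delay_half_bit(void);          /* busy-wait for half an SPI clock period */

enum { PIN_SCK, PIN_MOSI, PIN_MISO };      /* placeholder pin identifiers */

/* Transfer one byte, SPI mode 0 (CPOL=0, CPHA=0), MSB first.
 * The CPU is tied up for the full duration of all 8 bits. */
static uint8_t spi_bitbang_xfer(uint8_t out)
{
    uint8_t in = 0;

    for (int bit = 7; bit >= 0; bit--) {
        gpio_write(PIN_MOSI, (out >> bit) & 1);  /* present data before the leading edge */
        delay_half_bit();
        gpio_write(PIN_SCK, 1);                  /* leading (rising) edge: slave samples MOSI */
        in = (uint8_t)((in << 1) | (gpio_read(PIN_MISO) & 1));
        delay_half_bit();
        gpio_write(PIN_SCK, 0);                  /* trailing edge */
    }
    return in;
}
```

Every one of those delay calls is time the CPU could otherwise have spent servicing other work.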

If the bit-banged interface is not to have a detrimental effect on real-time performance, then it must be given a low priority, in which case it will itself be non-deterministic in terms of data throughput and latency.

The most CPU-efficient transfer is achieved by using a hardware interface with a DMA transfer to minimise the software overhead. Bit-banging is at the opposite extreme of that.
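For contrast, a rough sketch of what the DMA-assisted path can look like. All register names and addresses below are invented for illustration (a real part's reference manual will differ); the point is that the CPU only writes a handful of registers and is then free until a transfer-complete interrupt fires.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped registers -- names and addresses invented for illustration. */
#define DMA_SRC   (*(volatile uint32_t *)0x40001000u)  /* source address         */
#define DMA_DST   (*(volatile uint32_t *)0x40001004u)  /* destination address    */
#define DMA_LEN   (*(volatile uint32_t *)0x40001008u)  /* transfer length, bytes */
#define DMA_CTRL  (*(volatile uint32_t *)0x4000100Cu)  /* bit 0 = start          */
#define SPI_TXDR  (0x40002008u)                        /* SPI transmit data reg  */

/* Queue a buffer for transmission: a few register writes, then the CPU is free
 * to do other work until the DMA-complete interrupt runs. */
static void spi_dma_send(const uint8_t *buf, size_t len)
{
    DMA_SRC  = (uint32_t)(uintptr_t)buf;
    DMA_DST  = SPI_TXDR;
    DMA_LEN  = (uint32_t)len;
    DMA_CTRL = 1u;          /* start: hardware clocks out every byte autonomously */
}
```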

I would not say it is horrible: if your application can still meet its responsiveness and real-time constraints, and bit-banging reduces the cost of the part needed or lets you use existing hardware, for example, then it may be entirely justified.

Daman answered 27/12, 2013 at 8:44 Comment(0)

Bit-banging is portable; see the I2C code in the Linux kernel drivers, for example. You can get it up and running quickly and it just works. Hardware-based solutions generally are not portable, take a while to get up and running, and are limited by the hardware implementation. Not all SPI devices, and in particular not all I2C devices, conform to a standard that can be handled by a generic hardware peripheral, so you must always be able to fall back on bit-banging.
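A sketch of why bit-banged code ports so well, loosely in the spirit of the kernel's i2c-algo-bit layering but simplified and with invented names: the protocol logic is written once against a tiny pin-access interface, and each board only supplies the callbacks.

```c
/* Portable bit-banged I2C core: the protocol code never touches hardware directly,
 * it only calls back into board-supplied pin functions (names are illustrative). */
struct i2c_bitbang_ops {
    void (*set_sda)(int level);   /* drive low or release SDA (open-drain)   */
    void (*set_scl)(int level);   /* drive low or release SCL                */
    int  (*get_sda)(void);        /* read SDA level                          */
    void (*delay)(void);          /* wait roughly a quarter of the bit time  */
};

static void i2c_start(const struct i2c_bitbang_ops *ops)
{
    /* START condition: SDA falls while SCL is high. */
    ops->set_sda(1);
    ops->set_scl(1);
    ops->delay();
    ops->set_sda(0);
    ops->delay();
    ops->set_scl(0);
}
```

Porting to a new board then means filling in one ops structure; the core routines (start, stop, read, write) are reused unchanged.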

Bit-banging consumes more processor resources, which makes it undesirable on that front. It is more portable, or can be depending on how it is written, which makes it desirable on that front. Hardware SPI/I2C is the opposite of those: it takes away some of the CPU overhead, but it is not portable and is not always flexible enough to handle all peripherals.

As a professional you need to be comfortable with both, just like any other embedded tradeoff you make in your design.

Iamb answered 26/12, 2013 at 22:2 Comment(0)

I don't know that it's horrible, but if you have SPI or I2C peripherals already available, there's certainly a "don't reinvent the wheel" argument to be made against bit-banging, especially as you might have a bug in your code. For example, you might be sampling on the wrong edge of the SPI clock, and depending on the tolerances involved and what hardware you test with, you might not notice it until you're already in production. The Wikipedia article also notes that you're using extra processing power and that you're probably going to introduce jitter into any signals you produce.
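As an illustration of how subtle that clock-edge bug can be, here is a sketch for SPI mode 0 (CPOL=0, CPHA=0); the GPIO helpers and pin names are hypothetical.

```c
#include <stdint.h>

extern void gpio_write(int pin, int level);   /* hypothetical platform helpers */
extern int  gpio_read(int pin);
enum { PIN_SCK, PIN_MISO };

/* SPI mode 0: the slave presents each bit before the rising edge of SCK,
 * so the master must sample MISO on the RISING edge. */
static int read_bit_mode0_correct(void)
{
    gpio_write(PIN_SCK, 1);
    int bit = gpio_read(PIN_MISO);   /* sample on the leading edge */
    gpio_write(PIN_SCK, 0);
    return bit;
}

/* Subtly wrong: sampling after the falling edge.  With a slow clock and a
 * forgiving slave this often still reads the right value on the bench, then
 * fails with a faster clock, longer wiring, or a different batch of parts. */
static int read_bit_mode0_wrong(void)
{
    gpio_write(PIN_SCK, 1);
    gpio_write(PIN_SCK, 0);
    return gpio_read(PIN_MISO);      /* slave may already be shifting out the next bit */
}
```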

All that said, bit-banging is the only choice available if your hardware doesn't have a built-in peripheral, or if the peripheral is already used up by other devices. For example, if the built-in SPI peripheral is occupied by a high-bandwidth device that you have to communicate with continuously, you might bit-bang to another SPI device that doesn't need to be so real-time in your application.

Mendacious answered 26/12, 2013 at 20:27 Comment(0)

I wouldn't call it horrible as such. But yes, when we implement a protocol by bit-banging, it is quite probable that the controller will miss other, more important tasks, because the protocol may consume more CPU time than dedicated hardware would. So it should be avoided in real-time, or time-critical, environments.

Along with that, there is one more concern with bit-banging: while reading from and/or writing to the pins, the signal produced normally has more jitter or glitches, especially if the controller is also executing other tasks while communicating. If bit-banging is unavoidable, then at least try to drive it with interrupts instead of polling.
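One way to read that "interrupts instead of polling" advice is to drive the bit-banged bus from a periodic timer interrupt that advances a small state machine one clock edge at a time, so the CPU never sits in delay loops between edges. A rough sketch with hypothetical names, sending one SPI byte (mode 0, MSB first):

```c
#include <stdint.h>
#include <stdbool.h>

extern void gpio_write(int pin, int level);   /* hypothetical GPIO helper */
enum { PIN_SCK, PIN_MOSI };

static volatile uint8_t tx_byte;
static volatile int     tx_bit   = -1;        /* -1 means idle            */
static volatile bool    sck_high = false;

/* Called from application code to start sending one byte. */
void spi_bitbang_start(uint8_t b)
{
    tx_byte  = b;
    tx_bit   = 7;
    sck_high = false;
}

/* Timer ISR, configured to fire at twice the desired SPI clock rate.
 * Each invocation produces one clock edge and returns immediately,
 * so the CPU is free between edges instead of busy-waiting. */
void timer_isr(void)
{
    if (tx_bit < 0)
        return;                                        /* nothing to send              */

    if (!sck_high) {
        gpio_write(PIN_MOSI, (tx_byte >> tx_bit) & 1); /* data set up while SCK is low */
        gpio_write(PIN_SCK, 1);                        /* leading edge: slave samples  */
        sck_high = true;
    } else {
        gpio_write(PIN_SCK, 0);                        /* trailing edge                */
        sck_high = false;
        tx_bit--;                                      /* next bit; reaches -1 when done */
    }
}
```

The transfer takes the same wall-clock time either way, but the cycles between edges are now available to other code.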

Bucentaur answered 3/4, 2014 at 6:55 Comment(0)

Reliable bit-banging is not really possible in a non-real-time system. And if you put it inside a non-interruptible kernel path, then you really have to make sure that you only bit-bang a certain number of bits before rescheduling user processes.

Consider this: you have a scheduling timer running at 1/1000 s intervals. When it runs, you check whether some process wants to send data over the bit-banged interface and you handle that request. The request requires you to bit-bang a byte at 9600 baud (as an example). Now you have a problem: it takes roughly 0.8 ms to bit-bang a byte (8 bits / 9600 bit/s ≈ 0.83 ms). You can't really afford this, because when the scheduling interrupt runs, it has to do its job, load the next process that needs to run, and then exit. That usually takes much less than 1 ms, and the rest of that millisecond is mostly spent running the user process until the next interrupt. But if you start bit-banging, then you spend most of that millisecond doing nothing.

One solution to this may be to use a timer peripheral just for the bit-banging. This gives fairly autonomous, interrupt-driven bit-banging code that does not have to sit idle at all, but only at the expense of a dedicated timer. If you can afford a dedicated hardware timer, then bit-banging will probably work great. But in general it is very hard to do reliable bit-banging at high speeds in a multitasking environment.
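As a sketch of that dedicated-timer idea, here is an interrupt-driven bit-banged UART transmitter for the 9600-baud example above; timer_start_hz and the pin helper are hypothetical placeholders for whatever your platform offers.

```c
#include <stdint.h>

extern void gpio_write(int pin, int level);                 /* hypothetical pin helper          */
extern void timer_start_hz(uint32_t hz, void (*cb)(void));  /* hypothetical: periodic callback  */

enum { PIN_TX };

static volatile uint16_t frame;       /* start bit + 8 data bits + stop bit */
static volatile int      bits_left;   /* 0 means the line is idle           */

/* Queue one byte for transmission as an 8N1 frame, LSB first. */
void uart_bitbang_send(uint8_t b)
{
    frame     = (uint16_t)((1u << 9) | ((uint16_t)b << 1));  /* stop=1, data, start=0 */
    bits_left = 10;
}

/* Timer callback firing at 9600 Hz: one bit per tick, no busy-waiting,
 * so the scheduling interrupt never blocks for a whole byte time. */
static void baud_tick(void)
{
    if (bits_left == 0) {
        gpio_write(PIN_TX, 1);        /* idle line is high */
        return;
    }
    gpio_write(PIN_TX, frame & 1);
    frame >>= 1;
    bits_left--;
}

/* Somewhere in init code: timer_start_hz(9600, baud_tick); */
```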

Seaman answered 22/9, 2014 at 7:11 Comment(0)
