Actually, I'm not sure you're really asking the right question here.
It sounds to me like you're looking for a solution by changing your tools, not by changing your design and your approach. In fact, the Access JET engine is substantially faster than something like Oracle, MySQL, or SQL Server for most local operations. The reason is that those other products are huge server-based systems with socket connections to the server. They have layers of transaction processing. There are probably hundreds of extra layers of software and systems between you and the actual data that resides on the hard drive.
Contrast that to Access, which is essentially an in-process engine (not a running service). You do not connect to Access data files through some TCP/IP connection like you do with server-based systems (in fact, most of those server-based systems force you through a networking layer even on your local machine, unless you use a shared-memory connection, assuming that option is even available).
JET (the Access database engine) is not a service; it simply reads the file off the hard drive and returns the results. That reading of data off the disk drive occurs at the same speed as it does for Oracle or SQL Server and all of those other systems (we're assuming the same machine and hardware here). Yet those other systems still have hundreds of extra layers of code, software, network connections, and heavyweight features like user security. All of these things substantially slow down getting to the data on the disk drive.
Now of course, if you're talking about a connection over some type of network, then those server-based systems are better, because you want all the processing and all that magic to occur BEFORE any data starts to flow down the network pipe.
However, in your scenario the server and the client machine are one and the same. Therefore it makes complete sense to eliminate that huge stack of extra layers of software. As I pointed out, in these types of scenarios JET can be 50% faster, or even double the speed, of server-based systems like MySQL or Oracle.
Access can join, categorize, and total up inventory for 150,000 records in well under a second, and that's with a several-table join.
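To give a concrete picture of the kind of query I mean, here is a minimal sketch using SQLite in Python as a stand-in for JET (both are in-process engines; the table and column names are invented for the example). A grouped, totalled join over 150,000 rows of this shape finishes almost instantly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, cat_id INTEGER, qty INTEGER)")
cur.executemany("INSERT INTO category VALUES (?, ?)",
                [(1, "widgets"), (2, "gadgets")])
# 150,000 inventory rows spread across the two categories
cur.executemany("INSERT INTO item VALUES (?, ?, ?)",
                ((i, 1 + i % 2, i % 10) for i in range(150_000)))

# Join, categorize, and total -- the shape of query described above
rows = cur.execute("""
    SELECT c.name, COUNT(*) AS items, SUM(i.qty) AS total_qty
    FROM item AS i
    JOIN category AS c ON c.id = i.cat_id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
```

The equivalent in Access would be an ordinary saved query with a JOIN and GROUP BY; the point is that the aggregation itself is cheap on an in-process engine.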
Now, on the other hand, in any of these systems the largest overhead is usually opening a connection to a particular table. In fact, the time it takes to open a table is roughly the cost of transferring 30,000 records. This means you want to ensure that your code does not unnecessarily open a new table (especially inside some type of code loop). In other words, instead of repeatedly executing an SQL INSERT command, you're far better off to open a recordset once and do the inserts that way: you're not using SQL commands anymore, so for each row inserted you're not paying for a separate parse of the SQL text. This can give you about a 100-times increase in performance when using Access this way. In other words, the often-quoted advice that using SQL commands is faster than opening a recordset is completely incorrect.
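In Access this pattern means opening a DAO Recordset once and calling AddNew/Update in the loop, rather than calling db.Execute with a fresh INSERT string each time. The same principle can be sketched in Python with SQLite (again only a stand-in for an in-process engine; the table name is made up): reuse one prepared statement instead of building and parsing a new SQL string for every row.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale (id INTEGER, amount REAL)")

# Slow pattern: a brand-new SQL string is built and parsed for every row
t0 = time.perf_counter()
for i in range(20_000):
    conn.execute(f"INSERT INTO sale VALUES ({i}, {i * 1.5})")
per_row_sql = time.perf_counter() - t0

# Fast pattern: one statement, prepared once, executed for every row
# (the moral equivalent of opening a Recordset once and calling AddNew in a loop)
t0 = time.perf_counter()
conn.executemany("INSERT INTO sale VALUES (?, ?)",
                 ((i, i * 1.5) for i in range(20_000)))
prepared = time.perf_counter() - t0

total = conn.execute("SELECT COUNT(*) FROM sale").fetchone()[0]
```

The exact speedup depends on the engine and driver, but the structural point is the same everywhere: pay the open/parse cost once, not once per row.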
What this means is: if you are experiencing some kind of slowdown here, I would look at your code and designs, and ensure that recordsets and datasets are not being repeatedly opened and closed. You should not be experiencing any noticeable delay in your data operations given the tiny size of the files you mention.
To be fair, SQLite is also an in-process, non-server database engine (it is an independent project, not an edition of MySQL, despite the similar name), and most of the advantages pointed out above would also apply to it. But then again, your bottleneck would not be much different in either case, and thus we're back to design issues here.
In other words, you're barking up the wrong tree. A developer who looks for a change of tools to fix performance is simply blaming the tools, when in most cases the problem lies in the design adopted.