MS Access databases on slow network: Is it faster to separate back ends?

I have an Access database containing information about people (employee profiles and related information). The front end has a single console-like interface that modifies one type of data at a time (such as academic degrees from one form, contact information from another). It is currently linked to multiple back ends (one for each type of data, and one for the basic profile information). All files are located on a network share and many of the back ends are encrypted.

The reason I have done this is my understanding that MS Access has to pull the entire database file to the local computer in order to run any query or update, then write any changed data back to the network share. My theory is that if a person is changing a telephone number or address (contact information), they would only have to pull/modify/replace the contact-information database, rather than pull a single large database containing contact information, projects, degrees, awards, etc. just to change one telephone number. That should reduce the potential for locked databases and network traffic when multiple users are accessing data.

Is this a sane conclusion? Do I misunderstand a great deal? Am I missing something else?

I realize there is per-file overhead to consider, but I don't know how great the impact is. If I were to consolidate the back ends, there is also the potential benefit of being able to let Access handle referential integrity for cascading deletes, etc., rather than coding for that...

I'd appreciate any thoughts or (reasonably valid) criticisms.

Priapus answered 29/4, 2011 at 14:7 Comment(1)
Probably not much help, but I switched to MySQL. Much easier to work with. – Stupe

This is a common misunderstanding:

"MS Access has to pull the entire database file to the local computer in order to make any queries or updates"

Consider this query:

SELECT first_name, last_name
FROM Employees
WHERE EmpID = 27;

If EmpID is indexed, the database engine will read just enough of the index to find which table rows match, then read only the matching rows. If the index includes a unique constraint (say EmpID is the primary key), the lookup is faster still, because the engine can stop after the first match. The database engine doesn't read the entire table, nor even the entire index.

Without an index on EmpID, the engine would do a full table scan of the Employees table --- meaning it would have to read every row of the table to determine which ones contain matching EmpID values.

But either way, the engine doesn't need to read the entire database ... Clients, Inventory, Sales, etc. tables ... it has no reason to read all that data.
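
If EmpID turns out not to be indexed, a single DDL statement fixes that. Here is a minimal sketch in VBA/DAO --- the index name is illustrative, and it assumes the Employees table is in the database you run it from:

Sub IndexEmpID()
    ' CREATE UNIQUE INDEX is standard Jet/ACE DDL; use CREATE INDEX
    ' (without UNIQUE) if EmpID values can repeat.
    ' dbFailOnError raises an error instead of failing silently.
    CurrentDb.Execute "CREATE UNIQUE INDEX idxEmpID ON Employees (EmpID);", dbFailOnError
End Sub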

You're correct that there is overhead for connections to the back-end database files. The engine must manage a lock file for each database. I don't know the magnitude of that impact. If it were me, I would create a new back-end database and import the tables from the others. Then make a copy of the front-end and re-link to the back-end tables. That would give you the opportunity to examine the performance impact directly.
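
If you want to try that experiment, here is a rough sketch of both steps in VBA. Every path and table name below is a placeholder, and it assumes the old back-ends are not password-protected (encrypted files would need their passwords supplied):

' Step 1: run from inside the NEW back-end; repeat for each table
' in each old back-end file.
Sub ImportOneTable()
    DoCmd.TransferDatabase acImport, "Microsoft Access", _
        "\\server\share\contacts_be.accdb", acTable, "Contacts", "Contacts"
End Sub

' Step 2: run from inside the COPY of the front-end to point every
' linked table at the consolidated back-end.
Sub RelinkAllTables()
    Dim tdf As DAO.TableDef
    For Each tdf In CurrentDb.TableDefs
        If Len(tdf.Connect) > 0 Then   ' only linked tables have a Connect string
            tdf.Connect = ";DATABASE=\\server\share\combined_be.accdb"
            tdf.RefreshLink
        End If
    Next tdf
End Sub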

Seems to me referential integrity should be a strong argument for consolidating the tables into a single back-end: the engine can only enforce relationships (and cascading deletes) between tables that live in the same database file.
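
Once the tables share a file, you can let the engine do the enforcing. A minimal sketch of creating a cascading-delete relationship in VBA/DAO --- the table and field names are guesses based on your description:

Sub AddCascadingDelete()
    Dim db As DAO.Database
    Dim rel As DAO.Relation
    Set db = CurrentDb   ' run inside the consolidated back-end
    ' Parent Employees, child Degrees, joined on EmpID; deleting an
    ' employee row now automatically deletes that employee's degrees.
    Set rel = db.CreateRelation("EmployeesDegrees", "Employees", "Degrees", _
        dbRelationDeleteCascade)
    rel.Fields.Append rel.CreateField("EmpID")
    rel.Fields("EmpID").ForeignName = "EmpID"
    db.Relations.Append rel
End Sub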

Regarding locking, you shouldn't ever need to lock the entire back-end database for routine DML (INSERT, UPDATE, DELETE) operations. The database engine supports more granular locking. There is also pessimistic vs. optimistic locking --- whether the lock is taken as soon as you begin editing a row, or deferred until you save the changed row.
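
In DAO code, that choice shows up as the recordset's LockEdits property. A quick sketch --- the Contacts table and Phone field are placeholders:

Sub UpdateOnePhoneNumber()
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("Contacts", dbOpenDynaset)
    rs.LockEdits = False    ' optimistic: lock held only while Update runs
    ' rs.LockEdits = True   ' pessimistic: lock taken as soon as Edit is called
    rs.FindFirst "EmpID = 27"
    If Not rs.NoMatch Then
        rs.Edit
        rs!Phone = "555-0100"
        rs.Update
    End If
    rs.Close
End Sub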

Actually, "slow network" could be the biggest concern, if "slow" means a wireless network. Access is only safe on a hard-wired LAN.

Edit: Access is not appropriate for a WAN environment. See this page by Albert D. Kallal.

Costate answered 29/4, 2011 at 14:51 Comment(6)
I do understand about not reading (if interpreted as somewhere between 'processing' and 'examining') any farther into the index than necessary, or any unnecessary columns or rows. However, I would like to ask specifically: does that mean that only the parts of the file you mentioned (index and rows with matching IDs) are transferred over the network, and no other data is transferred at all? ----- The slow network is VERY large (transnational), congested, and has many levels of security checks, but is nonetheless wired. ----- Locking: I do not actively lock the db; locking is due to encrypted databases. – Priapus
Additionally, I'd like to point out that I would prefer to be implementing this functionality in a SharePoint/SQL Server environment, as I feel a little more comfortable with it (rightly or wrongly). The use of Access front and back ends is what I believed to be my only option, since I am restricted to workstations with Office and a network share (no web or database servers to be used, because the paperwork would take too long - did I mention this is a small office in the military?). Perhaps "slow" is less accurate than "laggy". – Priapus
I updated my answer with a link about Access and WANs. You will have database corruption eventually, if you haven't gotten it already. Regarding how much gets transferred: yes, enough of the file must be read to determine the location (in the file) of the system tables, then to find the offsets for the data tables you're querying/updating/etc. Also, there is caching going on with Windows and/or your Access application instance which affects how much data needs to be read fresh --- but those details are beyond my pay grade. continued ... – Costate
And distributing the data among multiple back-end database files will probably require more network data transfer. Really, IMO, you have a bigger concern (Access and WAN) than the differences in the amount of network data transferred for multiple vs. single back-end database files. – Costate
Just now saw your second comment. Yeah, you're stuck between a rock and a hard place on this one. You're right, a server database is the way to go; this is not an appropriate situation for Access. Since you're stuck, at least make sure you have an adequate backup plan ... and test it. Maybe you'll get lucky regarding corruption. But I wouldn't wager anything valuable on it. – Costate
On the issue of multiple back ends: the metadata that allows Jet/ACE to request just the needed data pages from a database file is stored in a tree within each database. If you have to traverse a different tree for every table (because every table is in its own back end), then you've greatly increased the overhead. With a single back-end file containing multiple tables, the metadata tree that describes where the index and data pages are stored will be retrieved less often than with multiple files. – Milkandwater

MS Access is not good to use on a local area network, much less a wide area network, which will certainly have lower speeds. The solution is to use a client-server database such as MS SQL Server or MySQL. MS SQL Server is much better than MySQL, but it is not free; consider MS SQL Server for large-scale projects. Again, MS Access is only good for one computer, not for a computer network.

Rack answered 22/8, 2018 at 1:38 Comment(0)
