With the latest chromedriver.exe I am running into out-of-disk-space issues because chromedriver is not deleting the folders named scoped_* at the end of the execution. They occupy almost 20 GB of space for 400 tests. I tried both the 2.28 and 2.29 versions of chromedriver. I am exiting the driver properly with driver.close() and driver.Quit(). The Chrome browser version is 57.
I managed this by deleting the temp folders that begin with "scoped_dir" after quitting the driver, like this:
public static void teardown_()
{
    // Quit the driver first so ChromeDriver releases its handles on the temp folders.
    if (driver != null)
        driver.Quit();

    // Delete all "scoped_dir*" temp folders left behind in %TEMP%.
    string tempfolder = System.IO.Path.GetTempPath();
    string[] tempdirs = Directory.GetDirectories(tempfolder, "scoped_dir*", SearchOption.AllDirectories);
    foreach (string tempdir in tempdirs)
    {
        try
        {
            // Delete only the matched "scoped_dir*" folder (recursively),
            // not every subdirectory of %TEMP%.
            new System.IO.DirectoryInfo(tempdir).Delete(true);
        }
        catch (Exception ex)
        {
            writeEx("Folder '" + tempdir + "' could not be deleted:\r\n" +
                    "Exception: " + ex.Message + ".");
        }
    }
}
Hope it helps!
Equal: This is a known bug that will be fixed with Chromedriver 2.30: https://bugs.chromium.org/p/chromedriver/issues/detail?id=644
This appears to be a race condition between ChromeDriver and Chrome. ChromeDriver creates these temp directories for use by Chrome, and at the end ChromeDriver tries to delete those directories. ChromeDriver waits for the main Chrome process to terminate before doing the deletion, but some Chrome child processes might still be running and holding on to those directories, causing the deletion to fail. Currently ChromeDriver doesn't retry the deletion.
Deleting the temp files like Daniel mentioned can be a temporary solution but I would remove it as soon as Chromedriver 2.30 is released.
Update
Chromedriver 2.30 is out and should fix this issue.
Update 2
It seems your mileage may vary with this. The release notes for that version listed this as a solved problem back in the day, but some people still see the issue. Also, this version is extremely old at this point, so while this answer was relevant at the time of posting, newer versions of ChromeDriver should be used.
Still seeing scoped_dirXXXX directories in %TEMP% using chromedriver 2.30.1. Will have to go the manual cleanup route posted above... just create 5-6k drivers and you'll see too ;-)
Equal: The 2.30 patch notes said the issue was fixed. Where did you get 2.30.1? I don't see anywhere that mentions that version being out yet.
Burton: Using the latest chromedriver 2.30.1 didn't solve the issue for me; I kept running out of storage in my %TEMP% directory when running parallel Selenium jobs.
The best solution is to control the userDataDir via ChromeOptions and dispose of the directory yourself after you driver.quit().
If your process is synchronous then @cdzar's solution above will work, but for parallel jobs you really need to control directory creation and disposal yourself, as sketched below.
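A minimal C# sketch of that approach, assuming the Selenium .NET bindings; the base path and the GUID-based folder name are illustrative placeholders, not part of the original answer:

using System;
using System.IO;
using OpenQA.Selenium.Chrome;

// Give every driver instance its own profile directory so parallel jobs
// never share (or orphan) a scoped_dir folder in %TEMP%.
string userDataDir = Path.Combine(@"D:\selenium-profiles", Guid.NewGuid().ToString());
var options = new ChromeOptions();
options.AddArgument("--user-data-dir=" + userDataDir);
var driver = new ChromeDriver(options);
try
{
    // ... run the test ...
}
finally
{
    driver.Quit();
    // The directory is now ours to clean up, regardless of what ChromeDriver does with it.
    try { Directory.Delete(userDataDir, true); } catch (IOException) { /* still locked; retry or log */ }
}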
The issue was reported and fixed. Check out 2.30 or 2.31.
UPD. At least it works for our grid. If you still have issues, it's better to report them in any scoped_dir thread on productforums.google.com. In addition, before it was fixed we used a PS script that cleared all files in ..*\AppData\Local\Temp
UPD. Check that the Chrome browser has completed its update process. Along with this fix we had an issue where the browser stayed in the state "restart required for update to complete" even after a restart. It may be that both the browser and the driver have to be updated for the fix to work; I cannot say for sure.
UPD2. I see some people still have the issue (maybe they re-released it?). Here is a sample of the PS script that was used on a Windows machine at the time we had the issue. Cleaner.ps1:
# infinite loop for calling the cleanup logic
$ScriptPath = $MyInvocation.MyCommand.Definition
# run until 1 January 2030
$timeout = new-timespan -end (get-date -year 2030 -month 1 -day 1)
$sw = [diagnostics.stopwatch]::StartNew()
while ($sw.elapsed -lt $timeout) {
    if (-Not (test-path $ScriptPath)) {
        write-host "script has been renamed, quitting!"
        return
    }
    start-sleep -seconds 60
    # cleanup logic: remove temp files older than 120 minutes
    $time = Get-Date
    $maxdate = $time.AddMinutes(-120)
    Get-WmiObject -Class Win32_UserProfile | Foreach-Object {
        $path = $_.LocalPath
        if (-Not $path.Contains('Windows')) {
            echo $path
            $Files = Get-ChildItem "$($path)\..\*\AppData\Local\Temp" -recurse | ? { $_.LastWriteTime -lt $maxdate } |
                remove-item -force -recurse
            echo $Files
        }
    }
}
run.bat
REM PowerShell -Command "Set-ExecutionPolicy Unrestricted" >> "%TEMP%\StartupLog.txt" 2>&1
PowerShell C:\path2CleanerFolder\Cleaner.ps1
GL
This solution works on Selenium 3.141.59. Before doing a driver.quit() in your tear-down method, call driver.close(). Selenium WebDriver will then automatically delete the scoped_dir folders it creates during execution.
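A minimal C# sketch of such a tear-down, assuming the static driver field from the question:

public static void TearDown()
{
    if (driver != null)
    {
        // Close the current window first so Chrome can shut down its child processes...
        driver.Close();
        // ...then quit, which lets ChromeDriver remove the scoped_dir folders it created.
        driver.Quit();
    }
}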
We are running multiple ChromeDrivers at high concurrency, and I have seen a big improvement using Cornel's idea of adding a driver.close() before the driver.quit() in a test. Maybe it gives Chrome a little more time to shut down its processes before the quit, preventing a race/lock condition from happening?
If it turns out we need to do more, I will try coding something similar to Daniel's answer, but due to our level of concurrency, I'll attempt to delete only the specific folders created by each driver instance.
The directory name can be obtained this way:
// driver must expose capabilities here, e.g. be a ChromeDriver / RemoteWebDriver instance
Capabilities caps = driver.getCapabilities();
Map<String, String> chromeReturnedCapsMap = (Map<String, String>) caps.getCapability("chrome");
LOG.debug(" Chrome Driver Temp Dir : " + chromeReturnedCapsMap.get("userDataDir"));
This will print something like
Chrome Driver Temp Dir : C:\Users\Metal666\AppData\Local\Temp\scoped_dir35344_14668
However, it appears two directories are created; they differ in name only after the last underscore. For example, the directories might be named:
C:\Users\Metal666\AppData\Local\Temp\scoped_dir35344_14668
C:\Users\Metal666\AppData\Local\Temp\scoped_dir35344_28790
so the code would need to cater for deleting both directories, as sketched below.
Tested using Selenium 3.141.59, Chrome 74.0.., ChromeDriver 74.0..
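A minimal C# sketch of that per-instance cleanup (kept in the question's language rather than Java), assuming the userDataDir path has already been read from the returned capabilities as shown above:

// e.g. userDataDir = C:\Users\Metal666\AppData\Local\Temp\scoped_dir35344_14668
string parent = Path.GetDirectoryName(userDataDir);        // ...\AppData\Local\Temp
string name = Path.GetFileName(userDataDir);               // scoped_dir35344_14668
string prefix = name.Substring(0, name.LastIndexOf('_'));  // scoped_dir35344

// Delete every folder sharing that prefix, i.e. both scoped_dir35344_* directories.
foreach (string dir in Directory.GetDirectories(parent, prefix + "_*"))
{
    try { Directory.Delete(dir, true); }
    catch (IOException) { /* still held by a Chrome child process; retry or log */ }
}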
The fix for both synchronous and parallel chromedrivers that is currently working for me (running 5 drivers in Python, through PyCharm or a terminal) is:
First, add a custom and unique ID (a dummy command-line argument) for each driver:
options.add_argument('--foo')
Also make sure you point it to an HDD drive because of the constant rewrites:
options.add_argument("--user-data-dir=X:/path/to/dir")
Use psutil to track and kill each driver based on its unique ID, then shutil to delete the folders:
import psutil
import shutil

def kill_driver():
    while True:
        running_processes = []
        # Find every process that was started with our unique '--foo' marker.
        for proc in psutil.process_iter(['pid', 'cmdline']):
            if proc.info['cmdline'] and '--foo' in proc.info['cmdline']:
                running_processes.append(proc)
        for process in running_processes:
            process.kill()
            print(process, ' got shot')
        if not running_processes:
            print('all closed')
            shutil.rmtree('X:/path/to/dir')  # shutil to delete the user-data folder
            print('folder deleted')
            break
I'm using this at the end of each execution and on exceptions:
if __name__ == '__main__':
    while True:
        try:
            get_openings()
            gc.collect()
            print('marcano 10 - secundários')
            kill_driver()
            random_delay()
        except Exception as e:
            gc.collect()
            logging.exception(f"An error occurred: {str(e)}")
            print(f"An error occurred: {str(e)}")
            kill_driver()
            random_delay()
Best of luck and don't forget the imports!