Intern Concurrency Problem
Update: I've found the root issue and have detailed it at the end of this post.
Over the past year I've created and implemented Selenium testing on the Mozilla Developer Network using Intern, a testing framework created by SitePen. Intern's been awesome; sure, there's a learning curve with async JavaScript coding, but it's simple once you get the hang of it.
One problem I encountered with functional testing via services like BrowserStack and Sauce Labs is that we get failures we generally don't see when testing locally. When I tested with one browser, everything went well, but testing multiple browsers sent our tests into a spiral of transient failures. When I tweaked one setting, however, everything went to plan:
// Maximum number of simultaneous integration tests that should be
// executed on the remote WebDriver service
maxConcurrency: 1,
Setting the maxConcurrency value down to 1 was all we needed to do. Instead of all of the browsers spawning at once, the tests run in one browser at a time, moving on to the next browser only when the previous one finishes. Bingo!
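For reference, here's a minimal sketch of where that setting lives in an Intern configuration module; the environments, tunnel, and suite path are illustrative assumptions, not our actual setup:

define({
    // Browsers to run integration tests against
    environments: [
        { browserName: 'firefox' },
        { browserName: 'chrome' }
    ],

    // Maximum number of simultaneous integration tests that should be
    // executed on the remote WebDriver service
    maxConcurrency: 1,

    // Name of the tunnel class to use for WebDriver tests
    tunnel: 'BrowserStackTunnel',

    // Functional test suite(s) to run in each browser
    functionalSuites: [ 'tests/functional/index' ]
});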
As to what was causing the underlying issue, I'm not quite sure. The tests included authorization and login testing, so it's possible sign-ins and sign-outs overlapped, causing confusion on the server side. Regardless, if you need to get things moving quickly, limit the maxConcurrency setting and you may start seeing loads more tests pass.
Update: Firefox + Focus + Selenium Bug
After loads of testing and digging, I found the root cause of my tests passing when one browser runs but failing when browsers run concurrently: when Firefox is not the focused/"on top" browser, focus events don't fire. In my specific case I was testing CSS animations, and those don't run when the browser isn't focused. Hopefully this bug gets fixed on the Selenium side too!
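For context, here's a hedged sketch of the kind of functional test that exposed the bug, using Intern's object interface; the URL, selectors, and timing are hypothetical stand-ins:

define([
    'intern!object',
    'intern/chai!assert'
], function (registerSuite, assert) {
    registerSuite({
        name: 'menu animation',

        'menu slides open on click': function () {
            return this.remote
                .get('https://developer.mozilla.org/')
                .findByCssSelector('#nav-toggle')
                    .click()
                    .end()
                // Give the CSS animation time to finish; when Firefox is
                // not the focused window, the animation never runs and
                // the assertion below fails
                .sleep(1000)
                .findByCssSelector('#nav-menu')
                    .isDisplayed()
                    .then(function (displayed) {
                        assert.isTrue(displayed, 'menu should be visible after the animation');
                    });
        }
    });
});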
Yes, but your test suite will eventually take forever to finish.
You could try improving the isolation of the different scenarios. Have you tried using separate users for each scenario?
Best case, you have a web service for creating test users that you call at the beginning of every scenario.
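Something like this sketch captures the idea; the host, endpoint, helper name, and response shape are all hypothetical:

var https = require('https');

// Hypothetical helper: ask a test-user service for fresh credentials
// at the start of a scenario
function createTestUser() {
    return new Promise(function (resolve, reject) {
        var req = https.request({
            hostname: 'staging.example.com',  // assumed host
            path: '/api/test-users',          // assumed endpoint
            method: 'POST'
        }, function (res) {
            var body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () { resolve(JSON.parse(body)); });
        });
        req.on('error', reject);
        req.end();
    });
}

Each scenario would call createTestUser() in its setup and log in with the returned credentials, so concurrent sessions never share an account.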
I do use a web service to get test credentials for one test, but I don’t want to flood our environments with test-only users.
Have you tried using separate users for each scenario?
Sacrificing UI test concurrency is usually a bad idea; it's something you will definitely regret later.
Separate users would be a great idea and something I was hoping I wouldn’t have to do. Going to give that a shot today! :)
Oops I accidentally double posted!
Create a web service to clean up the test-only users and use a new user for every scenario.
That’s the safest approach.
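Pairing creation with cleanup could look roughly like this, reusing the hypothetical createTestUser() helper and the Intern object interface sketched above; deleteTestUser(), the host, and the selectors are likewise assumptions:

var https = require('https');

// Hypothetical companion to createTestUser(): removes the account
// once the scenario finishes
function deleteTestUser(id) {
    return new Promise(function (resolve, reject) {
        var req = https.request({
            hostname: 'staging.example.com',  // assumed host
            path: '/api/test-users/' + id,    // assumed endpoint
            method: 'DELETE'
        }, function (res) {
            res.resume();
            res.on('end', resolve);
        });
        req.on('error', reject);
        req.end();
    });
}

var user;

registerSuite({
    name: 'login',

    beforeEach: function () {
        // Fresh account for every scenario, so concurrent sessions
        // never sign in and out of the same user
        return createTestUser().then(function (u) { user = u; });
    },

    afterEach: function () {
        // Tear the account down so test-only users never pile up
        return deleteTestUser(user.id);
    },

    'logs in successfully': function () {
        return this.remote
            .get('https://staging.example.com/signin')
            .findByCssSelector('#username').type(user.name).end()
            .findByCssSelector('#password').type(user.password).end()
            .findByCssSelector('#submit').click();
    }
});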
Um… are you sure that your test failures aren’t a symptom of real race conditions in your code?
I think the race condition is login-related, since testing the browsers individually works fine.