The docs were relying on the grunt/util module to get version info,
but that code was unreliable and full of custom regexes. The logic has been
moved into a new version-info module that makes much better use of the semver library.
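For illustration, a minimal sketch of what a semver-based version-info helper could look like; the function name and returned fields are assumptions, not the actual module's API:

```js
// Hypothetical sketch of a version-info helper built on the semver library.
// getVersionInfo and the returned fields are illustrative, not the real API.
var semver = require('semver');

function getVersionInfo(versionString) {
  var v = semver.parse(versionString);
  if (!v) {
    throw new Error('Invalid version: ' + versionString);
  }
  return {
    full: v.version,            // e.g. "1.2.0-rc.3"
    major: v.major,
    minor: v.minor,
    patch: v.patch,
    prerelease: v.prerelease    // e.g. ["rc", 3]
  };
}
```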
Use the multiCapabilities feature of Protractor to start tests on multiple browsers
from the same Travis cell. Group tests by type (jquery, jqlite, or docs tests) instead
of by browser. Turn on tests for jQuery.
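As a rough sketch, a Protractor config can fan one grouped suite out over several browsers like this; the spec paths, browser list, and port are illustrative, not the exact values used here:

```js
// protractor-conf.js (illustrative): one suite grouped by test type,
// run against several browsers from a single config.
exports.config = {
  specs: ['test/e2e/docs/**/*.spec.js'],
  multiCapabilities: [
    {browserName: 'chrome'},
    {browserName: 'firefox'},
    {browserName: 'safari'}
  ],
  baseUrl: 'http://localhost:8000'
};
```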
Update the Travis and Jenkins configs to run Protractor tests on Safari and Firefox as well,
and make the Travis runs output XML and turn off colored output.
Fix tests which were failing in Firefox due to clear() not working as expected.
Fix tests which were failing in Safari due to SafariDriver not understanding the minus key,
and disable tests which SafariDriver has no support for.
The version information is now stored only in the tags.
This lets us release commits from the past that have already
been tested, so we no longer need a code freeze or a separate
test run. This is also the first step towards
letting Travis do the releases in the future.
The package.json now contains a new
property 'branchVersion' that defines which tags are
valid on this branch.
Closes #6116
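As a sketch of how a tag could be checked against that property (the helper name and the example range value are assumptions, not the actual build code):

```js
// Illustrative: validate a git tag against package.json's branchVersion range.
var semver = require('semver');
var pkg = require('./package.json');   // assumes e.g. "branchVersion": "~1.2.0"

function isTagValidForBranch(tag) {
  // Strip a leading "v" (e.g. "v1.2.14" -> "1.2.14") and check the range.
  var version = semver.valid(tag.replace(/^v/, ''));
  return version !== null && semver.satisfies(version, pkg.branchVersion);
}
```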
I think we are pretty close to being able to use both.
The xhr-polling transport seems to be pretty stable, but I'm having problems with multiple SSH tunnels on BrowserStack, so let's try switching back to Sauce Labs.
We can't establish multiple SSH tunnels for the same port (for BrowserStack).
This makes it possible to run multiple parallel builds using BrowserStack.
When we refactored, we broke CSP mode because the previous implementation
relied on the fact that it was OK to lazily initialize the .csp property; this
is no longer the case.
Besides, we need to know about CSP mode during bootstrap and avoid injecting the
stylesheet when CSP is active, so I refactored the code to fix both issues.
PR #4411 will follow up on this commit and add more improvements.
Closes #917
Closes #2963
Closes #4394
Closes #4444
BREAKING CHANGE: triggering the ngCsp directive via the `ng:csp` attribute is
no longer supported. Please use `data-ng-csp` instead.
All browsers except Chrome implemented both the old
"//@ sourceMappingURL" and the new "//# sourceMappingURL" pragmas
in the same version, so the only reason to keep the old one was Chrome.
However, Chrome 29 (the current stable version) already supports
the new pragma, so there is no need to wait any longer.
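For clarity, the two pragmas differ only in their second character; the map file name below is just an example:

```js
//@ sourceMappingURL=angular.min.js.map   (old pragma, now dropped)
//# sourceMappingURL=angular.min.js.map   (new pragma, the only one kept)
```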
Some browsers do not allow proxying localhost, so Sauce Labs uses another proxy on the VM. This proxy only forwards certain ports (Sauce Connect proxies all ports).
This is why Safari did not connect for the e2e tests: port 9877 was not proxied.
This change makes sure we use Sauce-enabled ports.
Also, instead of running everything in parallel, there are only two parallel tasks:
- e2e tests running in the background (only on Chrome)
- all the unit tests running sequentially
Grunt is configured to run `npm install` before every task. That is convenient when switching branches, for example.
On Travis this makes no sense and causes tons of npm warnings (e.g. packages not defining the repository field).
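A sketch of how a Gruntfile could guard that step; the task name and log message are made up for illustration:

```js
// Gruntfile.js excerpt (illustrative): skip `npm install` when on Travis,
// where dependencies are already installed by the CI setup.
var execSync = require('child_process').execSync;

module.exports = function(grunt) {
  grunt.registerTask('npm-install', function() {
    if (process.env.TRAVIS) {
      grunt.log.writeln('On Travis, skipping npm install');
      return;
    }
    execSync('npm install', {stdio: 'inherit'});
  });
};
```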
When running locally there is no TRAVIS_JOB_NUMBER env variable defined, which breaks
Sauce Connect (it ends up using a tunnel with an empty name). This change makes it work locally
without defining the TRAVIS_JOB_NUMBER env variable.
Also, if you run sauce_connect_setup.sh locally without SAUCE_CONNECT_READY_FILE set, it
does not pass the `--ready-file` argument, to avoid Sauce Connect blowing up.
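On the consumer side, the same fallback idea can be expressed when naming the tunnel; a hedged sketch, where the 'local-tunnel' name and test name are assumptions:

```js
// karma.conf.js excerpt (illustrative): give Sauce Connect a non-empty
// tunnel name even when TRAVIS_JOB_NUMBER is not defined (local runs).
module.exports = function(config) {
  config.set({
    sauceLabs: {
      testName: 'AngularJS unit tests',
      tunnelIdentifier: process.env.TRAVIS_JOB_NUMBER || 'local-tunnel'
    }
  });
};
```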
Changes:
- Fix our old code to use bower_components/ as the install dir
- Fix the Bootstrap asset to use github.com/twbs/bootstrap (it moved)
- Fail the build on Bower failure. Bower should not fail silently.
angular.css is used by the utils.js CSS wrap operation, but ng-hide and
any other CSS styles present in angular.css cannot be overridden unless
those styles appear before the stylesheet is in place. This fix allows
that to work.
- parallelize the tasks
- cache requests (e2e tests)
This reduces the time from ~18min to ~12min.
It makes the output a little messy. We could buffer the output of each task and display it nicely once it has fully finished, but I think giving instant feedback is better.