Compare commits


263 Commits

Author SHA1 Message Date
Daniel García
10dadfca06 Update web vault to 2022.12.0 2022-12-18 20:37:01 +01:00
Daniel García
bf73a8235f Merge branch 'BlackDex-fix-yubikey-panic' 2022-12-18 20:32:10 +01:00
BlackDex
67a584c1d4 Disable groups by default and some optimizations
- Put groups support behind a feature flag, disabled by default.
  The reason is that it has some known issues, but we want to keep
  optimizing this feature. Putting it behind a feature flag can help
  some users while letting the developers keep optimizing this feature
  without too much trouble.

Further:

- Updates Rust to v1.66.0
- Updated GHA workflows
- Updated Alpine to 3.17
- Updated jquery to v3.6.2
- Moved jdenticon.js to load at the bottom, fixes an issue on chromium
- Added autocomplete attribute to admin login password field
- Added some extra CSP options (Tested this on Safari, Firefox, Chrome, Bitwarden Desktop)
- Moved the uppercase conversion from runtime to compile time using `paste`
  when building the environment variable names, which lowers heap allocations
  (see the sketch below).
2022-12-18 20:32:06 +01:00
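
As a rough, hypothetical illustration of the compile-time uppercase conversion mentioned in the commit above (not the actual Vaultwarden macro), the `paste` crate can upper-case an identifier while pasting it, so no runtime `to_uppercase()` call or heap allocation is needed:

```rust
// Hypothetical sketch: `paste` turns an identifier into its upper-cased
// form at compile time, yielding a &'static str instead of a heap String.
macro_rules! env_key {
    ($name:ident) => {
        // [<$name:upper>] pastes e.g. `data_folder` as `DATA_FOLDER`.
        paste::paste! { stringify!([<$name:upper>]) }
    };
}

fn main() {
    // No runtime uppercase conversion or allocation happens here.
    assert_eq!(env_key!(data_folder), "DATA_FOLDER");
}
```
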
BlackDex
8e5f03972e Fix recover-2fa not working.
When audit logging was introduced, a small bug slipped in that prevented
the recover-2fa from working.

This PR fixes that by adding a new headers check to extract the device type
when possible and use that for the logging.

Fixes #2985
2022-12-18 20:32:06 +01:00
Daniel García
d8abf8f98f Merge branch 'BlackDex-some-optimizations' 2022-12-18 20:31:22 +01:00
BlackDex
cb348d2e05 Fix recover-2fa not working.
When audit logging was introduced, a small bug slipped in that prevented
the recover-2fa from working.

This PR fixes that by adding a new headers check to extract the device type
when possible and use that for the logging.

Fixes #2985
2022-12-18 20:31:17 +01:00
Daniel García
aceb111024 Merge branch 'BlackDex-issue-2985' 2022-12-18 20:25:46 +01:00
BlackDex
b60a4a68c7 Fix a panic during Yubikey register/login
The yubico crate uses blocking reqwest, and we called `verify` from an
async context. To prevent issues we need to wrap it in
`spawn_blocking`.
2022-12-18 17:57:35 +01:00
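
A minimal sketch (assumed names, not the actual Vaultwarden code) of wrapping a blocking verification call in `tokio::task::spawn_blocking` so it runs on the blocking thread pool instead of an async worker:

```rust
// Sketch only: `blocking_verify` stands in for the yubico crate's blocking,
// reqwest-based verify call.
fn blocking_verify(otp: String) -> Result<(), String> {
    if otp.len() == 44 { Ok(()) } else { Err("invalid OTP length".into()) }
}

#[tokio::main]
async fn main() {
    let otp = "c".repeat(44);
    // Run the blocking call on the blocking pool and await its result.
    let result = tokio::task::spawn_blocking(move || blocking_verify(otp))
        .await
        .expect("the blocking task panicked");
    println!("verification result: {result:?}");
}
```
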
BlackDex
8b6dfe48b7 Disable groups by default and some optimizations
- Put groups support behind a feature flag, disabled by default.
  The reason is that it has some known issues, but we want to keep
  optimizing this feature. Putting it behind a feature flag can help
  some users while letting the developers keep optimizing this feature
  without too much trouble.

Further:

- Updates Rust to v1.66.0
- Updated GHA workflows
- Updated Alpine to 3.17
- Updated jquery to v3.6.2
- Moved jdenticon.js to load at the bottom, fixes an issue on chromium
- Added autocomplete attribute to admin login password field
- Added some extra CSP options (Tested this on Safari, Firefox, Chrome, Bitwarden Desktop)
- Moved the uppercase conversion from runtime to compile time using `paste`
  when building the environment variable names, which lowers heap allocations.
2022-12-16 14:52:42 +01:00
BlackDex
6154e03c05 Fix recover-2fa not working.
When audit logging was introduced, a small bug slipped in that prevented
the recover-2fa from working.

This PR fixes that by adding a new headers check to extract the device type
when possible and use that for the logging.

Fixes #2985
2022-12-15 15:57:30 +01:00
Daniel García
d0b53a6a3d Update web vault to v2022.11.2 2022-12-12 23:11:46 +01:00
Daniel García
317aa679cf Merge branch 'BlackDex-issue-2975' 2022-12-12 22:56:32 +01:00
BlackDex
8d1bc2e539 Fix org export (again)
It looks like Bitwarden, in the end, didn't change the export feature
in v2022.11.0, and has now put it in v2023.1.0.

This patch now changes that to the same version.
Before those new clients are released, we should see if they
changed that again, and adjust where needed.
2022-12-12 22:56:14 +01:00
BlackDex
50c46f6e9a Remove ctrlc crate and some updates
- Removed the ctrlc crate and use tokio's provided ctrl_c function (see the sketch below).
- Updated some crates.
2022-12-12 22:56:10 +01:00
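
For reference, a minimal sketch of waiting for Ctrl+C with tokio's built-in handler instead of the external ctrlc crate (simplified; the real shutdown wiring is more involved):

```rust
// Sketch: tokio::signal::ctrl_c() resolves once Ctrl+C / SIGINT is received.
#[tokio::main]
async fn main() {
    tokio::signal::ctrl_c()
        .await
        .expect("failed to install the Ctrl+C handler");
    println!("Ctrl+C received, shutting down");
}
```
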
Helmut K. C. Tessarek
4f1928778a use 32x32 favicon for consistency 2022-12-12 22:56:09 +01:00
Helmut K. C. Tessarek
5fcba3d7f5 use black favicon for /admin 2022-12-12 22:56:09 +01:00
Helmut K. C. Tessarek
4db42b07c4 Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-12 22:56:09 +01:00
BlackDex
cd3e2d7a5a Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show `://` and `,`, for
example, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:56:09 +01:00
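
An illustrative sketch (not the actual masking function) of hiding a config value while keeping the `://` and `,` separators visible:

```rust
// Sketch only: mask every character except the "://" and "," separators,
// so the shape of the value stays recognizable in the support string.
fn mask_value(value: &str) -> String {
    value
        .split("://")
        .map(|part| {
            part.split(',')
                .map(|p| "*".repeat(p.chars().count()))
                .collect::<Vec<_>>()
                .join(",")
        })
        .collect::<Vec<_>>()
        .join("://")
}

fn main() {
    // Prints "*****://************": separators kept, everything else hidden.
    println!("{}", mask_value("mysql://user:pw@host"));
}
```
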
Daniel García
d139e22042 Merge branch 'BlackDex-fix-org-export' 2022-12-12 22:55:56 +01:00
BlackDex
892296e6d5 Remove ctrlc crate and some updates
- Removed the ctrlc crate and use tokio's provided ctrl_c function.
- Updated some crates.
2022-12-12 22:55:17 +01:00
Helmut K. C. Tessarek
992ef399ed use 32x32 favicon for consistency 2022-12-12 22:55:17 +01:00
Helmut K. C. Tessarek
5afba46743 use black favicon for /admin 2022-12-12 22:55:16 +01:00
Helmut K. C. Tessarek
df0aa7949e Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-12 22:55:16 +01:00
BlackDex
353d2e6e01 Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show `://` and `,`, for
example, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:55:16 +01:00
Daniel García
f9375bb215 Merge branch 'BlackDex-replace-ctrlc-crate' 2022-12-12 22:55:06 +01:00
Helmut K. C. Tessarek
8d04ff66e7 use 32x32 favicon for consistency 2022-12-12 22:55:02 +01:00
Helmut K. C. Tessarek
e649b11511 use black favicon for /admin 2022-12-12 22:55:02 +01:00
Helmut K. C. Tessarek
bda19bdddf Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-12 22:55:01 +01:00
BlackDex
99fd92df21 Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show `://` and `,`, for
example, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:55:01 +01:00
Daniel García
1210310063 Merge branch 'tessus-fix/admin-icon' 2022-12-12 22:54:49 +01:00
Helmut K. C. Tessarek
b093384385 Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-12 22:54:45 +01:00
BlackDex
cec45ae9bd Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show `://` and `,`, for
example, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:54:45 +01:00
Daniel García
e6dd584dd6 Merge branch 'tessus-fix/env-template' 2022-12-12 22:54:34 +01:00
BlackDex
7cc74dabaf Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show `://` and `,`, for
example, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:54:30 +01:00
Daniel García
2336f102f9 Merge branch 'BlackDex-issue-2929' 2022-12-12 22:53:48 +01:00
BlackDex
cebe0f6442 Remove ctrlc crate and some updates
- Removed the ctrlc crate and use tokio's provided ctrl_c function.
- Updated some crates.
2022-12-12 12:58:48 +01:00
BlackDex
d9c0c23819 Revert collection queries back to left_join
Using the `inner_join` seems to cause issues, even though I had tested
it. Strangely, it does cause issues. Reverting it back to `left_join`
seems to solve the issue for me.

Fixes #2975
2022-12-12 12:21:48 +01:00
BlackDex
aa355a96f9 Fix org export (again)
It looks like Bitwarden, in the end, didn't change the export feature
in v2022.11.0, and has now put it in v2023.1.0.

This patch now changes that to the same version.
Before those new clients are released, we should see if they
changed that again, and adjust where needed.
2022-12-12 11:17:34 +01:00
BlackDex
4a85dd2480 Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show `://` and `,`, for
example, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-10 17:55:59 +01:00
Helmut K. C. Tessarek
213909baa5 use 32x32 favicon for consistency 2022-12-09 19:09:35 -05:00
Helmut K. C. Tessarek
6915a60332 use black favicon for /admin 2022-12-09 17:32:59 -05:00
Helmut K. C. Tessarek
52a50e9ade Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-09 16:31:40 -05:00
Daniel García
b7c9a346c1 Merge branch 'stefan0xC-use-custom-404-page' 2022-12-08 20:43:38 +01:00
BlackDex
2d90c6ac24 Fix managers and groups link
This PR should fix the managers and groups link.
Although I think there might be a cleaner solution, there are a lot of
other items to fix here, which we should do in time.

But for now, with group support already merged, this fix should at
least help solve issue #2932.

Fixes #2932
2022-12-08 20:43:34 +01:00
Daniel García
7f7b5447fd Merge branch 'BlackDex-issue-2932-take2' 2022-12-08 20:43:15 +01:00
BlackDex
142f7bb50d Fix managers and groups link
This PR should fix the managers and groups link.
Although I think there might be a cleaner solution, there are a lot of
other items to fix here, which we should do in time.

But for now, with group support already merged, this fix should at
least help solve issue #2932.

Fixes #2932
2022-12-08 12:37:21 +01:00
Stefan Melmuk
d209df9e10 use a custom 404 page
to customize the 404 page you can copy the handlebars template
`src/static/templates/404.hbs` to the TEMPLATES_FOLDER (defaults to
`data/templates/`)
2022-12-05 00:08:46 +01:00
Daniel García
1b56f4266b Merge branch 'BlackDex-sql-debugging' 2022-12-04 23:17:52 +01:00
BlackDex
d6dc6070f3 Fix admin repost warning.
Currently, when you log into the admin interface and then directly hit the
save button, it will come with a re-post/re-submit warning.
This has to do with the `window.location.reload()` function, which
triggers the admin login POST again.

By changing the way the page is reloaded, we prevent this repost.
2022-12-04 23:17:49 +01:00
BlackDex
d66323b742 Limit Cipher Note encrypted string size
As discussed in #2937, this will limit the number of encrypted
characters to 10,000, the same as Bitwarden.
This will not break current ciphers which exceed this limit, but it will prevent those
ciphers from being updated.

Fixes #2937
2022-12-04 23:17:48 +01:00
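
A hedged sketch (assumed names) of the size guard described above, rejecting notes whose encrypted form exceeds the 10,000-character limit:

```rust
// Sketch only: validate the length of the encrypted notes field.
const NOTES_LIMIT: usize = 10_000;

fn validate_notes(encrypted_notes: &str) -> Result<(), String> {
    if encrypted_notes.len() > NOTES_LIMIT {
        // Existing ciphers above the limit still load; only updates are rejected.
        Err(format!("The notes field exceeds the maximum encrypted length of {NOTES_LIMIT} characters"))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(validate_notes("short encrypted blob").is_ok());
    assert!(validate_notes(&"x".repeat(10_001)).is_err());
}
```
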
BlackDex
7b09d74b1f Update dependencies for Rust and Admin interface.
- Updated Rust deps and one small change regarding chrono
- Updated bootstrap 5 css
- Updated datatables
- Replaced identicon.js with jdenticon.
  identicon.js is unmaintained ( https://github.com/stewartlord/identicon.js/issues/52 )
  The icons are very different, but nice. It also doesn't need custom
  code to find and update the icons ourselves.
2022-12-04 23:17:48 +01:00
BlackDex
c0e3c2c5e1 Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-04 23:17:48 +01:00
Daniel García
06189a58fe Merge branch 'BlackDex-fix-admin-repost' 2022-12-04 23:16:54 +01:00
BlackDex
f402dd81bb Limit Cipher Note encrypted string size
As discussed in #2937, this will limit the number of encrypted
characters to 10,000, the same as Bitwarden.
This will not break current ciphers which exceed this limit, but it will prevent those
ciphers from being updated.

Fixes #2937
2022-12-04 23:16:50 +01:00
BlackDex
c885bbc947 Update dependencies for Rust and Admin interface.
- Updated Rust deps and one small change regarding chrono
- Updated bootstrap 5 css
- Updated datatables
- Replaced identicon.js with jdenticon.
  identicon.js is unmaintained ( https://github.com/stewartlord/identicon.js/issues/52 )
  The icons are very different, but nice. It also doesn't need custom
  code to find and update the icons ourselves.
2022-12-04 23:16:50 +01:00
BlackDex
63fb0e5a57 Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-04 23:16:49 +01:00
Daniel García
37d0792a7d Merge branch 'BlackDex-issue-2937' 2022-12-04 23:15:08 +01:00
BlackDex
c8040d2f63 Update dependencies for Rust and Admin interface.
- Updated Rust deps and one small change regarding chrono
- Updated bootstrap 5 css
- Updated datatables
- Replaced identicon.js with jdenticon.
  identicon.js is unmaintained ( https://github.com/stewartlord/identicon.js/issues/52 )
  The icons are very different, but nice. It also doesn't need custom
  code to find and update the icons ourselves.
2022-12-04 23:15:03 +01:00
BlackDex
dbcad65b68 Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-04 23:15:03 +01:00
Daniel García
226da67bc0 Merge branch 'BlackDex-update-deps' 2022-12-04 23:14:41 +01:00
BlackDex
fee2b5c3fb Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-04 23:14:37 +01:00
Daniel García
6bbb3d53ae Merge branch 'BlackDex-emergency-access-cleanup' 2022-12-04 23:14:02 +01:00
BlackDex
610b183cef Update dependencies for Rust and Admin interface.
- Updated Rust deps and one small change regarding chrono
- Updated bootstrap 5 css
- Updated datatables
- Replaced identicon.js with jdenticon.
  identicon.js is unmaintained ( https://github.com/stewartlord/identicon.js/issues/52 )
  The icons are very different, but nice. It also doesn't need custom
  code to find and update the icons ourselves.
2022-12-04 18:38:46 +01:00
BlackDex
1b64b9e164 Add dev-only query logging support
This PR adds query logging support as an optional feature.
It is only allowed during development/debug builds, and will abort when
used during a `--release` build.

For this feature to be fully activated you also need to set the
environment variable `QUERY_LOGGER=1` to activate the debug log level
for this crate, else there will be no output.

The reason for this PR is that it is sometimes useful to be able to see
the generated queries, for example when debugging an issue or trying to
optimize a query. Until now I always added this code when needed, but
having it as part of the codebase could benefit other developers who may
need it too.
2022-12-03 18:36:46 +01:00
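
One way such a dev-only guard can be expressed in Rust (an illustrative sketch, not necessarily how this PR implements it; the feature and variable names are taken from the commit text) is to emit a compile error when the feature is enabled in a release build and to gate output on the environment variable:

```rust
// Sketch: refuse to compile the query logger into --release builds.
#[cfg(all(feature = "query_logger", not(debug_assertions)))]
compile_error!("query_logger is a debug/development-only feature; build without it for --release");

fn main() {
    // Only emit query logs when QUERY_LOGGER=1 is set.
    let query_logging_enabled = std::env::var("QUERY_LOGGER").map_or(false, |v| v == "1");
    println!("query logging enabled: {query_logging_enabled}");
}
```
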
BlackDex
b022be9ba8 Fix admin repost warning.
Currently, when you log into the admin interface and then directly hit the
save button, it will come with a re-post/re-submit warning.
This has to do with the `window.location.reload()` function, which
triggers the admin login POST again.

By changing the way the page is reloaded, we prevent this repost.
2022-12-03 17:28:25 +01:00
BlackDex
7f11363725 Limit Cipher Note encrypted string size
As discussed in #2937, this will limit the number of encrypted
characters to 10,000, the same as Bitwarden.
This will not break current ciphers which exceed this limit, but it will prevent those
ciphers from being updated.

Fixes #2937
2022-12-02 16:25:11 +01:00
BlackDex
4aa6dd22bb Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-02 09:44:23 +01:00
Daniel García
8feed2916f Update web vault to v2022.11.1 2022-12-01 22:53:47 +01:00
Daniel García
59eaa0aa0d Merge branch 'stefan0xC-forward-to-admin-login' 2022-12-01 22:41:00 +01:00
Stefan Melmuk
d5e54cb576 only check sqlite parent if there could be one 2022-12-01 22:38:59 +01:00
Stefan Melmuk
8837660ba7 check if sqlite folder exists
instead of creating the parent folders to a sqlite database,
vaultwarden should just exit if the folder does not exist.

this should fix issues like #2835, where a wrongly configured
`DATABASE_URL` falls back to using sqlite
2022-12-01 22:38:59 +01:00
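
A minimal sketch of such a startup check (assumed names; the real check lives in Vaultwarden's config/db handling): exit instead of silently creating parent folders when the sqlite path points at a missing directory:

```rust
// Sketch only: verify that the sqlite database's parent directory exists.
use std::path::Path;
use std::process::exit;

fn check_sqlite_parent(database_url: &str) {
    if let Some(parent) = Path::new(database_url).parent() {
        if !parent.as_os_str().is_empty() && !parent.is_dir() {
            eprintln!(
                "Error: SQLite database directory `{}` does not exist or is not a directory",
                parent.display()
            );
            exit(1);
        }
    }
}

fn main() {
    check_sqlite_parent("data/db.sqlite3");
}
```
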
BlackDex
464a489b44 Update Vaultwarden logos
Updated the logos so the `V` is more visible.
Also, the cog itself is better now; the previous version wasn't fully round.
These versions are also used with the PR that updates the web-vault to use these logos.

Also updated the images in the static folder.
2022-12-01 22:38:59 +01:00
BlackDex
7035700c8d Add Organizational event logging feature
This PR adds event/audit logging support for organizations.
By default this feature is disabled, since it does log a lot and adds
extra database transactions.

All events are touched except a few, since we do not support those
features (yet), like SSO for example.

This feature is tested with multiple clients and all database types.

Fixes #229
2022-12-01 22:38:59 +01:00
Daniel García
23c2921690 Merge branch 'stefan0xC-check-sqlite-folder' 2022-12-01 22:36:01 +01:00
BlackDex
7d506f3633 Update Vaultwarden logos
Updated the logos so the `V` is more visible.
Also, the cog itself is better now; the previous version wasn't fully round.
These versions are also used with the PR that updates the web-vault to use these logos.

Also updated the images in the static folder.
2022-12-01 22:35:57 +01:00
BlackDex
b186813049 Add Organizational event logging feature
This PR adds event/audit logging support for organizations.
By default this feature is disabled, since it does log a lot and adds
extra database transactions.

All events are touched except a few, since we do not support those
features (yet), like SSO for example.

This feature is tested with multiple clients and all database types.

Fixes #229
2022-12-01 22:35:57 +01:00
Daniel García
bfa82225da Merge pull request #2940 from BlackDex/update-logos
Update Vaultwarden logos
2022-12-01 22:10:37 +01:00
Daniel García
ffa2044563 Merge pull request #2868 from BlackDex/impl-events
Add Organizational event logging feature
2022-12-01 22:09:27 +01:00
BlackDex
d57b69952d Update Vaultwarden logos
Updated the logos so the `V` is more visible.
Also, the cog itself is better now; the previous version wasn't fully round.
These versions are also used with the PR that updates the web-vault to use these logos.

Also updated the images in the static folder.
2022-12-01 12:32:24 +01:00
Stefan Melmuk
5a13efefd3 only check sqlite parent if there could be one 2022-11-28 22:54:03 +01:00
Stefan Melmuk
2f9d7060bd check if sqlite folder exists
instead of creating the parent folders to a sqlite database,
vaultwarden should just exit if the folder does not exist.

this should fix issues like #2835, where a wrongly configured
`DATABASE_URL` falls back to using sqlite
2022-11-28 22:54:02 +01:00
Stefan Melmuk
0aa33a2cb4 don't use param for passing the redirect info
revert some changes and also rename catcher to `admin_login` to make its
function clearer

Co-authored-by: BlackDex <black.dex@gmail.com>
2022-11-28 18:21:30 +01:00
Stefan Melmuk
fa7dbedd5d redirect to admin login page when forward fails
currently, if the admin guard fails, the user will get a 404 page,
and when the session times out after 20 minutes, POST methods will
give the reason "undefined" as a response, while generating the support
string will fail without any user feedback.

this commit changes the error handling on admin pages

* by removing the reliance on Rocket's forwarding and making the login
  page an explicit route that can be redirected to from all admin pages

* by removing the obsolete and mostly unused Referer struct we can
  redirect the user back to the requested admin page directly

* by providing an error message for json requests the
  `get_diagnostics_config` and all post methods can return a more
  comprehensible message and the user can be alerted

* the `admin_url()` function can be simplified because rfc2616 has been
  obsoleted by rfc7231 in 2014 (and also by the recently released
  rfc9110) which allows relative urls in the Location header.

  c.f. https://www.rfc-editor.org/rfc/rfc7231#section-7.1.2 and
  https://www.rfc-editor.org/rfc/rfc9110#section-10.2.2
2022-11-28 16:46:06 +01:00
BlackDex
2ea9b66943 Add Organizational event logging feature
This PR adds event/audit logging support for organizations.
By default this feature is disabled, since it does log a lot and adds
extra database transactions.

All events are touched except a few, since we do not support those
features (yet), like SSO for example.

This feature is tested with multiple clients and all database types.

Fixes #229
2022-11-27 23:36:34 +01:00
Daniel García
f3beaea9e9 Merge pull request #2933 from stefan0xC/fix-manager-issue
allow managers to set groups of a collection
2022-11-27 22:02:10 +01:00
Daniel García
39ae2f1f76 Merge pull request #2928 from karbobc/settings-description
Update settings description
2022-11-27 22:01:54 +01:00
Daniel García
366b1050ec Merge pull request #2921 from BlackDex/issue-2909
Prevent DNS leak when icon regex is configured
2022-11-27 22:00:54 +01:00
Daniel García
b3aab7a6ad Merge pull request #2920 from BlackDex/issue-2889
Added missing `register` endpoint to `identity`
2022-11-27 22:00:23 +01:00
Daniel García
aa8d050d6b Merge pull request #2919 from BlackDex/issue-2828
Fully remove DuckDuckGo email service.
2022-11-27 21:59:51 +01:00
Daniel García
5200f0e98d Merge pull request #2918 from BlackDex/issue-2761
Set "Bypass admin page security" as read-only
2022-11-27 21:59:39 +01:00
Daniel García
5f4abb1b7f Merge pull request #2911 from skid9000/patch-1
Update config comment to reflect rfc8314.
2022-11-27 21:59:18 +01:00
Daniel García
dfe1e30d1b Merge pull request #2910 from samueltardieu/const-crypto
Use constant size generic parameter for random bytes generation
2022-11-27 21:58:27 +01:00
Stefan Melmuk
e27a5be47a allow managers to set groups of a collection
fixes #2932
2022-11-23 15:47:45 +01:00
Karbob
56786a18f1 Update settings description
Update description to `admin login requests`.
2022-11-22 22:12:06 +08:00
BlackDex
0d2399d485 Prevent DNS leak when icon regex is configured
When an icon blacklist regex was configured to not check a domain, it
still did a DNS lookup first. This could cause a DNS leak for these
regex-blocked domains.

This PR resolves this issue by first checking the regex, and only
afterwards running the other checks.

Fixes #2909
2022-11-14 17:25:44 +01:00
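
The ordering described above, sketched with illustrative names (not the actual functions): evaluate the blacklist regex before any resolver call, so blocked domains never trigger a DNS lookup:

```rust
// Sketch only: check the blacklist regex first, resolve DNS afterwards.
fn is_domain_allowed(domain: &str, blacklist: Option<&regex::Regex>) -> bool {
    if let Some(re) = blacklist {
        if re.is_match(domain) {
            // Rejected purely on the regex; no DNS query is ever sent.
            return false;
        }
    }
    // Only now perform the DNS lookup and the remaining validity checks
    // (resolver call omitted in this sketch).
    true
}

fn main() {
    let re = regex::Regex::new(r"(^|\.)internal\.example$").unwrap();
    assert!(!is_domain_allowed("host.internal.example", Some(&re)));
    assert!(is_domain_allowed("github.com", Some(&re)));
}
```
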
BlackDex
5bfc7cfde3 Added missing register endpoint to identity
In the upcoming web-vault and other clients they changed the register
endpoint from `/api/accounts/register` to `/identity/register`.

This PR adds the new endpoint so we are already compatible with the new
clients.

Fixes #2889
2022-11-14 17:22:37 +01:00
BlackDex
723f0cbc1e Fully remove DuckDuckGo email service.
The DuckDuckGo email service is not supported for self-hosted servers.
This option is already hidden via the latest web-vault.

This PR also removes some server-side headers.

Fixes #2828
2022-11-14 17:19:30 +01:00
BlackDex
b141f789f6 Set "Bypass admin page security" as read-only
It was possible to disable the admin security via the admin interface.
This is rather insecure, as mentioned in #2761.

This PR sets this value as read-only, and admins need to set the correct ENV variable.
Currently saved settings which do override this are still valid, though.
If an admin wants this removed, they either need to reset the config,
or change the value in the `config.json` file.

Fixes #2761
2022-11-14 17:18:25 +01:00
Samuel Tardieu
7445ee40f8 Remove get_random_64()
Its uses are replaced by get_random_bytes() or encode_random_bytes().
2022-11-13 10:03:06 +01:00
Skid
4a9a0f7e64 Update .env.template
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2022-11-12 22:39:41 +01:00
Skid
63aad2e5d2 Update config comment to reflect rfc8314. 2022-11-12 22:07:03 +01:00
Samuel Tardieu
d0baa23f9a Use constant size generic parameter for random bytes generation
All uses of `get_random()` were in the form of:

  `&get_random(vec![0u8; SIZE])`

with `SIZE` being a constant.

Building a `Vec` is unnecessary for two reasons. First, it uses a
very short-lived dynamic memory allocation. Second, a `Vec` is a
resizable object, which is useless in this context, where the random
data have a fixed size and will only be read.

`get_random_bytes()` takes a constant as a generic parameter and
returns an array with the requested number of random bytes.

Stack safety analysis: the random bytes will be allocated on the
caller stack for a very short time (until the encoding function has
been called on the data). In some cases, the random bytes take
less room than the `Vec` did (a `Vec` is 24 bytes on a 64 bit
computer). The maximum used size is 180 bytes, which amounts to
0.008% of the default stack size for a Rust thread (2 MiB),
so this is a non-issue.

Also, most of the uses of those random bytes are to encode them
using an `Encoding`. The function `crypto::encode_random_bytes()`
generates random bytes and encodes them with the provided
`Encoding`, leading to code deduplication.

`generate_id()` has also been converted to use a constant generic
parameter, since the length of the requested String is always
a constant.
2022-11-11 11:59:27 +01:00
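
A minimal sketch of the shape described above (Vaultwarden's crypto module uses `ring`; the function body here is illustrative): a const generic size parameter lets the random bytes live in a plain stack array instead of a `Vec`:

```rust
// Sketch only: fill a fixed-size stack array with random bytes.
use ring::rand::{SecureRandom, SystemRandom};

pub fn get_random_bytes<const N: usize>() -> [u8; N] {
    let mut bytes = [0u8; N];
    SystemRandom::new()
        .fill(&mut bytes)
        .expect("random bytes generation failed");
    bytes
}

fn main() {
    // The requested size is a compile-time constant, e.g. 32 bytes for a token.
    let token: [u8; 32] = get_random_bytes::<32>();
    println!("{} random bytes generated", token.len());
}
```
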
Daniel García
7a7673103f Merge branch 'BlackDex-org-export-fixes' 2022-11-09 22:40:05 +01:00
GeekCorner
05d4788d1d fix: removed a double space 2022-11-09 22:40:01 +01:00
BlackDex
6f0dea1b56 Add /devices/knowndevice endpoint
Added a new endpoint which the current beta client, at least for
Android v2022.10.1, seems to be calling; it crashes with the response we
currently provide.

Fixes #2890
Fixes #2891
Fixes #2892
2022-11-09 22:40:00 +01:00
BlackDex
439ef44973 Update Rust version, deps and workflow
- Update Rust to v1.65.0
- Update dependencies
- Updated workflow files
- Added some extra clippy checks
- Fixed some clippy checks
2022-11-09 22:40:00 +01:00
Daniel García
2a525b42cb Merge branch 'GeekCornerGH-fix-double-space-email' 2022-11-09 22:38:38 +01:00
BlackDex
aee91acfdc Add /devices/knowndevice endpoint
Added a new endpoint which the current beta client, at least for
Android v2022.10.1, seems to be calling; it crashes with the response we
currently provide.

Fixes #2890
Fixes #2891
Fixes #2892
2022-11-09 22:38:34 +01:00
BlackDex
17388ec43e Update Rust version, deps and workflow
- Update Rust to v1.65.0
- Update dependencies
- Updated workflow files
- Added some extra clippy checks
- Fixed some clippy checks
2022-11-09 22:38:34 +01:00
Daniel García
bdc1cd13a7 Merge branch 'BlackDex-add-knowndevice-endpoint' 2022-11-09 22:37:53 +01:00
BlackDex
42db4b5c77 Update Rust version, deps and workflow
- Update Rust to v1.65.0
- Update dependencies
- Updated workflow files
- Added some extra clippy checks
- Fixed some clippy checks
2022-11-09 22:37:48 +01:00
Daniel García
53da073274 Merge branch 'BlackDex-update-rust-and-deps' 2022-11-09 22:37:19 +01:00
BlackDex
b010dde661 Update Rust version, deps and workflow
- Update Rust to v1.65.0
- Update dependencies
- Updated workflow files
- Added some extra clippy checks
- Fixed some clippy checks
2022-11-08 14:03:31 +01:00
BlackDex
c9ec389b24 Support Org Export for v2022.11 clients
Since v2022.9.x the org export uses a different endpoint.
But, since v2022.11.x this endpoint will return a different format.
See: https://github.com/bitwarden/clients/pull/3641 and https://github.com/bitwarden/server/pull/2316

To support both versions, in case users have an older client
(either web-vault or CLI), this PR checks the version and responds using
the correct format. If no version can be determined it will use the new
format as the default.
2022-11-07 17:13:34 +01:00
GeekCorner
baa2841b04 fix: removed a double space 2022-11-07 08:39:56 +01:00
BlackDex
6af5c86081 Add /devices/knowndevice endpoint
Added a new endpoint which the current beta client, at least for
Android v2022.10.1, seems to be calling; it crashes with the response we
currently provide.

Fixes #2890
Fixes #2891
Fixes #2892
2022-11-06 18:07:09 +01:00
Daniel García
f60a6929a9 Update dependencies 2022-10-26 21:42:38 +02:00
Daniel García
2aa97fa121 Update web vault to v2022.10.2 2022-10-26 21:42:37 +02:00
Jeremy Lin
b59809af46 Sync global_domains.json to bitwarden/server@7c783c9 (Atlassian) 2022-10-26 21:42:37 +02:00
Stefan Melmuk
ed24d51d3e validate cron expressions on startup 2022-10-26 21:42:36 +02:00
Stefan Melmuk
870f0d0932 validate billing_email on save 2022-10-26 21:42:36 +02:00
GeekCorner
31b77bf178 feat: Bump web-vault to v2022.10.1 2022-10-23 18:34:12 +02:00
Daniel García
b525f9aa4c Merge pull request #2724 from dani-garcia/diesel2
Update diesel to 2.0.2
2022-10-23 00:49:57 +02:00
Daniel García
8409b31d6b Update to diesel2 2022-10-23 00:49:23 +02:00
Daniel García
b878495d64 Update sponsors 2022-10-23 00:49:06 +02:00
Daniel García
945b85da2f Merge pull request #2852 from BlackDex/update-github-workflows
Update github workflows
2022-10-21 18:42:47 +02:00
BlackDex
d4577d161e Update github workflows
Updated all the actions where possible.
For most actions this should mean that they are using the
latest GitHub Actions core, and should partially resolve issue #2847.

The only actions left are the actions from `actions-rs`.
2022-10-21 13:21:57 +02:00
Daniel García
3c8e1c3ca9 Add liberapay funding option 2022-10-20 21:07:26 +02:00
Daniel García
88dba8c4dd Merge pull request #2846 from MFijak/group_support_diff
Group support | applied .diff
2022-10-20 21:04:46 +02:00
MFijak
21bc3bfd53 group support 2022-10-20 15:31:53 +02:00
Daniel García
4cb5122e90 Merge pull request #2844 from jjlin/healthcheck
Take `ROCKET_ADDRESS` into account in the Docker healthcheck
2022-10-20 12:26:09 +02:00
Jeremy Lin
0a2a8be0ff Take ROCKET_ADDRESS into account in the Docker healthcheck 2022-10-20 01:04:09 -07:00
Daniel García
720a046610 Merge pull request #2804 from stefan0xC/verify-on-invite
verify email on registration by invite
2022-10-19 23:09:47 +02:00
Stefan Melmuk
64ae5d4f81 verify email on registration via invite link
if `SIGNUPS_VERIFY` is enabled new users that have been invited have
their onboarding flow interrupted because they have to first verify
their mail address before they can join an organization.

we can skip the extra verification of the email address when signing up
because a valid invitation token already means that the email address is
working and we don't allow invited users to sign up with a different
address.

unfortunately, this is not possible with emergency access invitations
at the moment as they are handled differently.
2022-10-19 22:44:17 +02:00
Daniel García
ff7e22c08a Merge pull request #2840 from jjlin/global-domains
Sync global_domains.json
2022-10-19 21:56:22 +02:00
Jeremy Lin
0c267d073f Sync global_domains.json to bitwarden/server@ea300b2 (Amazon) 2022-10-19 12:33:04 -07:00
Daniel García
bbc6470f65 Merge branch 'BlackDex-fix-password-hint' 2022-10-19 20:40:24 +02:00
Stefan Melmuk
23f1f8a576 allow registration without invite link
if signups are allowed invited users should be able to complete their
registration even when they don't have the invite link at hand.
2022-10-19 20:39:14 +02:00
Stefan Melmuk
0e6f6e612a use static_files() for email attachments
Apply suggestions from code review

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2022-10-19 20:39:13 +02:00
Stefan Melmuk
4d1b860dad attach images to email
Set SMTP_EMBED_IMAGES option to false if you don't want to attach images
to the mail.

NOTE: If you have customized the template files `email_header.hbs` and
`email_footer.hbs` you can replace `{url}/vw_static/` with `{img_url}`
to support both URL schemes
2022-10-19 20:39:13 +02:00
Stefan Melmuk
6576914e55 fix invitations of new users when mail is disabled
If you add a new user that has already been Invited to another
organization they will be Accepted automatically. This should not be
possible because they cannot be Confirmed until they have completed
their registration. It is also not necessary because their invitation
will be accepted automatically once they register.
2022-10-19 20:39:07 +02:00
Daniel García
12075639f3 Merge branch 'stefan0xC-allow-registration-without-invite-link' 2022-10-19 20:23:30 +02:00
Stefan Melmuk
3b9bfe55d0 use static_files() for email attachments
Apply suggestions from code review

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2022-10-19 20:23:24 +02:00
Stefan Melmuk
a0c6a7c0de attach images to email
Set SMTP_EMBED_IMAGES option to false if you don't want to attach images
to the mail.

NOTE: If you have customized the template files `email_header.hbs` and
`email_footer.hbs` you can replace `{url}/vw_static/` with `{img_url}`
to support both URL schemes
2022-10-19 20:23:24 +02:00
Stefan Melmuk
a2d716aec3 fix invitations of new users when mail is disabled
If you add a new user that has already been Invited to another
organization they will be Accepted automatically. This should not be
possible because they cannot be Confirmed until they have completed
their registration. It is also not necessary because their invitation
will be accepted automatically once they register.
2022-10-19 20:23:24 +02:00
Daniel García
c1c60e3b68 Merge branch 'stefan0xC-email-attach-images' 2022-10-19 20:22:03 +02:00
Stefan Melmuk
ed6e852904 fix invitations of new users when mail is disabled
If you add a new user that has already been Invited to another
organization they will be Accepted automatically. This should not be
possible because they cannot be Confirmed until they have completed
their registration. It is also not necessary because their invitation
will be accepted automatically once they register.
2022-10-19 20:21:58 +02:00
Daniel García
a54065420c Merge branch 'stefan0xC-fix-invitation-of-new-users' 2022-10-19 20:20:14 +02:00
Stefan Melmuk
aa5a05960e allow registration without invite link
if signups are allowed invited users should be able to complete their
registration even when they don't have the invite link at hand.
2022-10-18 12:49:07 +02:00
BlackDex
f41ba2a60f Fix master password hint update not working.
- The Master Password Hint input has changed its location to the
password update form. This PR updates the code to process this.

- Also changed the `ProfileData` struct to exclude `Culture` and
`MasterPasswordHint`, since both are not used at all, and when not
defined they will also not be allocated.

Fixes #2833
2022-10-17 17:23:21 +02:00
Stefan Melmuk
2215cfefb9 fix invitations of new users when mail is disabled
If you add a new user that has already been Invited to another
organization they will be Accepted automatically. This should not be
possible because they cannot be Confirmed until they have completed
their registration. It is also not necessary because their invitation
will be accepted automatically once they register.
2022-10-15 16:19:26 +02:00
Stefan Melmuk
4289663a16 use static_files() for email attachments
Apply suggestions from code review

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2022-10-15 04:59:33 +02:00
Stefan Melmuk
ea19c2250e attach images to email
Set SMTP_EMBED_IMAGES option to false if you don't want to attach images
to the mail.

NOTE: If you have customized the template files `email_header.hbs` and
`email_footer.hbs` you can replace `{url}/vw_static/` with `{img_url}`
to support both URL schemes
2022-10-15 04:59:31 +02:00
Daniel García
638766b346 Update web-vault to 2022.10.0 and dependencies 2022-10-14 18:21:01 +02:00
Daniel García
d1ff136552 Merge branch 'stefan0xC-check-data-folder-permissions' 2022-10-14 17:56:48 +02:00
Jeremy Lin
46ec11de12 Update CSP for DuckDuckGo email forwarding
Upstream PR: https://github.com/bitwarden/clients/pull/3630
2022-10-14 17:56:42 +02:00
Jeremy Lin
4283a49e0b Reformat CSP header for readability 2022-10-14 17:56:42 +02:00
Jeremy Lin
1e32db8c41 Add CreationDate to cipher response JSON
Upstream PR: https://github.com/bitwarden/server/pull/2142
2022-10-14 17:56:42 +02:00
Stefan Melmuk
0f944ec7e2 fix link of license badge
master branch has been renamed to main.
2022-10-14 17:56:41 +02:00
Daniel García
736dbc9553 Merge branch 'jjlin-csp' 2022-10-14 17:56:03 +02:00
Jeremy Lin
b4a38f1f63 Add CreationDate to cipher response JSON
Upstream PR: https://github.com/bitwarden/server/pull/2142
2022-10-14 17:56:00 +02:00
Stefan Melmuk
646186fe38 fix link of license badge
master branch has been renamed to main.
2022-10-14 17:55:59 +02:00
Daniel García
c2725916f4 Merge branch 'jjlin-creation-date' 2022-10-14 17:55:31 +02:00
Stefan Melmuk
fd334e2b7d fix link of license badge
master branch has been renamed to main.
2022-10-14 17:55:27 +02:00
Daniel García
f9feca1ce4 Merge branch 'stefan0xC-fix-link-in-license-badge' 2022-10-14 17:54:57 +02:00
Stefan Melmuk
677fd2ff32 fix link of license badge
master branch has been renamed to main.
2022-10-12 20:18:18 +02:00
Jeremy Lin
f49eb8eb4d Add CreationDate to cipher response JSON
Upstream PR: https://github.com/bitwarden/server/pull/2142
2022-10-12 00:17:09 -07:00
Jeremy Lin
b0e0d68632 Update CSP for DuckDuckGo email forwarding
Upstream PR: https://github.com/bitwarden/clients/pull/3630
2022-10-11 21:39:12 -07:00
Jeremy Lin
f3c8c16d79 Reformat CSP header for readability 2022-10-11 21:39:02 -07:00
Stefan Melmuk
2dd5086916 more verbose permission denied error
be a bit more verbose about why a file could not be created when it is
caused by a permission denied error.
2022-10-12 01:31:10 +02:00
Stefan Melmuk
7532072d50 add check if data folder is a directory 2022-10-12 01:26:28 +02:00
Daniel García
382e6107fe Update dependencies 2022-10-09 17:40:45 +02:00
Daniel García
e6c6609e19 8bit Solutions LLC. -> Bitwarden, Inc. 2022-10-09 17:13:46 +02:00
Daniel García
4cb5918950 Update web vault to v2022.9.2 2022-10-09 17:13:32 +02:00
Daniel García
55030f3687 Merge branch 'stefan0xC-return-token-expired-message' 2022-10-09 16:22:33 +02:00
Stefan Melmuk
ef4072e4ff improve spelling of minimum expiration hours check
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2022-10-09 16:21:13 +02:00
Stefan Melmuk
c78d383ed1 make invitation expiration time configurable
configure the number of hours after which organization invites,
emergency access invites, email verification emails and account deletion
requests expire (defaults to 5 days or 120 hours and must be at least 1)
2022-10-09 16:21:13 +02:00
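
A small sketch of the validation implied by that constraint (the setting and function names here are assumptions, not necessarily the real ones):

```rust
// Sketch only: the expiration window defaults to 120 hours (5 days)
// and must be at least 1 hour.
fn validate_invitation_expiration_hours(hours: u32) -> Result<u32, &'static str> {
    if hours < 1 {
        Err("`INVITATION_EXPIRATION_HOURS` must be at least 1")
    } else {
        Ok(hours)
    }
}

fn main() {
    assert_eq!(validate_invitation_expiration_hours(120), Ok(120));
    assert!(validate_invitation_expiration_hours(0).is_err());
}
```
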
Stefan Melmuk
5b96270874 return "Object" for consistency
Co-authored-by: Jeremy Lin <jjlin@users.noreply.github.com>
2022-10-09 16:21:12 +02:00
Stefan Melmuk
2c0742387b return CaptchaBypassToken and register object 2022-10-09 16:21:12 +02:00
Stefan Melmuk
1704d14f29 v2022.9.2 expects a json response when registering 2022-10-09 16:21:12 +02:00
Stefan Melmuk
2d7ffbf378 allow the removal of non-confirmed owners
ensure user_to_edit and user_to_delete are actually confirmed users,
before checking if they are the last owner of an organization.
2022-10-09 16:21:11 +02:00
Daniel García
dfd63f85c0 Merge branch 'stefan0xC-configure-expirations' 2022-10-09 16:20:07 +02:00
Stefan Melmuk
cd0c49eaf6 return "Object" for consistency
Co-authored-by: Jeremy Lin <jjlin@users.noreply.github.com>
2022-10-09 16:19:33 +02:00
Stefan Melmuk
080e38d227 return CaptchaBypassToken and register object 2022-10-09 16:19:32 +02:00
Stefan Melmuk
1a664fba6a v2022.9.2 expects a json response when registering 2022-10-09 16:19:32 +02:00
Stefan Melmuk
c915ef815d allow the removal of non-confirmed owners
ensure user_to_edit and user_to_delete are actually confirmed users,
before checking if they are the last owner of an organization.
2022-10-09 16:19:32 +02:00
Daniel García
adea4ec54d Merge branch 'stefan0xC-update-to-v2022.9.2' 2022-10-09 16:17:16 +02:00
Stefan Melmuk
387b5eb2dd allow the removal of non-confirmed owners
ensure user_to_edit and user_to_delete are actually confirmed users,
before checking if they are the last owner of an organization.
2022-10-09 16:17:11 +02:00
Daniel García
6337af59ed Merge branch 'stefan0xC-allow-removal-of-invited-owners' 2022-10-09 16:13:57 +02:00
Stefan Melmuk
475c7b8f16 return more descriptive JWT validation messages 2022-10-09 13:55:22 +02:00
Stefan Melmuk
ac120be1c6 improve spelling of minimum expiration hours check
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2022-10-09 05:50:43 +02:00
Stefan Melmuk
b70316e6d3 make invitation expiration time configurable
configure the number of hours after which organization invites,
emergency access invites, email verification emails and account deletion
requests expire (defaults to 5 days or 120 hours and must be at least 1)
2022-10-08 18:37:16 +02:00
Stefan Melmuk
0a0f620d0b return "Object" for consistency
Co-authored-by: Jeremy Lin <jjlin@users.noreply.github.com>
2022-10-08 10:27:33 +02:00
Stefan Melmuk
9132cc4a30 return CaptchaBypassToken and register object 2022-10-07 08:06:55 +02:00
Stefan Melmuk
e50edcadfb v2022.9.2 expects a json response when registering 2022-10-07 03:00:52 +02:00
Stefan Melmuk
2685099720 allow the removal of non-confirmed owners
ensure user_to_edit and user_to_delete are actually confirmed users,
before checking if they are the last owner of an organization.
2022-09-27 10:21:23 +02:00
Daniel García
6fa6eb18e8 Remove unused value in config endpoint 2022-09-25 19:22:05 +02:00
Daniel García
bb79396f0e Merge branch 'stefan0xC-catch-404-errors' 2022-09-25 19:05:12 +02:00
BlackDex
da9fd6b7d0 Fix organization vault export
Since v2022.9.x it seems they changed the export endpoint and the way it works.
This PR fixes this by adding the export endpoint.

Also, it looks like the clients can't handle JSON keys that start with an
uppercase letter. Because of this there is now a function which converts the
first character of all keys to lowercase.

I have reported an issue at Bitwarden asking whether this is expected behavior: https://github.com/bitwarden/clients/issues/3606

Fixes #2760
Fixes #2764
2022-09-25 19:04:56 +02:00
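
A hedged sketch of a key-conversion helper like the one described (names and placement are illustrative): recursively lowercase the first character of every JSON object key:

```rust
// Sketch only: walk a serde_json::Value and lowercase the first character
// of every object key, recursing into nested objects and arrays.
use serde_json::{json, Value};

fn lowercase_first_keys(value: Value) -> Value {
    match value {
        Value::Object(map) => Value::Object(
            map.into_iter()
                .map(|(k, v)| {
                    let key = {
                        let mut chars = k.chars();
                        match chars.next() {
                            Some(first) => first.to_lowercase().chain(chars).collect::<String>(),
                            None => String::new(),
                        }
                    };
                    (key, lowercase_first_keys(v))
                })
                .collect(),
        ),
        Value::Array(items) => Value::Array(items.into_iter().map(lowercase_first_keys).collect()),
        other => other,
    }
}

fn main() {
    let exported = json!({ "Collections": [{ "Name": "Work" }], "Items": [] });
    // Prints {"collections":[{"name":"Work"}],"items":[]}
    println!("{}", lowercase_first_keys(exported));
}
```
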
BlackDex
5b8067ef77 Update libraries and Rust version
- Updated to Rust v1.64.0
- Updated all libraries
- Updated multer-rs to be based upon the latest version
- Updated Dockerfiles to match the Rust version
2022-09-25 19:04:53 +02:00
BlackDex
9eabcd5cae Add support for send v2 API endpoints
This PR adds support for the Send v2 API.
It should prevent 404 errors which could cause some issues with some
configurations on some reverse proxies.

In the long run, we can probably remove the old file upload API, but for
now let's leave it there, since Bitwarden also still has this endpoint in
the code.

Might fix #2753
2022-09-25 19:04:48 +02:00
Aaron
d6e0d4cbbd fix: update warning and success case verbiage 2022-09-25 19:04:48 +02:00
Aaron
e5e6db2688 fix: tooltip typo 2022-09-25 19:04:48 +02:00
BlackDex
186fe24484 Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job if specific files are touched or not, we just need to make sure
these jobs are always started.

Also, because we now check the builds for an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changed. This is now also addressed by
naming that job 'msrv' instead of the version number.
2022-09-25 19:04:47 +02:00
Daniel García
5da96d36e6 Merge branch 'BlackDex-fix-org-export' 2022-09-25 19:03:55 +02:00
BlackDex
f4b1071e23 Update libraries and Rust version
- Updated to Rust v1.64.0
- Updated all libraries
- Updated multer-rs to be based upon the latest version
- Updated Dockerfiles to match the Rust version
2022-09-25 19:03:44 +02:00
BlackDex
18291b6533 Add support for send v2 API endpoints
This PR adds support for the Send v2 API.
It should prevent 404 errors which could cause some issues with some
configurations on some reverse proxies.

In the long run, we can probably remove the old file upload API, but for
now let's leave it there, since Bitwarden also still has this endpoint in
the code.

Might fix #2753
2022-09-25 19:03:33 +02:00
Aaron
8095cb68bb fix: update warning and success case verbiage 2022-09-25 19:03:33 +02:00
Aaron
04cd751556 fix: tooltip typo 2022-09-25 19:03:33 +02:00
BlackDex
7ce2372f51 Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job if specific files are touched or not, we just need to make sure
these jobs are always started.

Also, because we now check the builds for an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changed. This is now also addressed by
naming that job 'msrv' instead of the version number.
2022-09-25 19:03:32 +02:00
Daniel García
aebda93afe Merge branch 'BlackDex-update-libraries-and-rust' 2022-09-25 19:01:30 +02:00
BlackDex
2b7b1141eb Add support for send v2 API endpoints
This PR adds support for the Send v2 API.
It should prevent 404 errors which could cause some issues with some
configurations on some reverse proxies.

In the long run, we can probably remove the old file upload API, but for
now let's leave it there, since Bitwarden also still has this endpoint in
the code.

Might fix #2753
2022-09-25 18:59:26 +02:00
Aaron
1ff4ff72bf fix: update warning and success case verbiage 2022-09-25 18:59:25 +02:00
Aaron
d27e91a9b0 fix: tooltip typo 2022-09-25 18:59:25 +02:00
BlackDex
7cf063b196 Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job if specific files are touched or not, we just need to make sure
these jobs are always started.

Also, because we now check the builds for an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changed. This is now also addressed by
naming that job 'msrv' instead of the version number.
2022-09-25 18:59:25 +02:00
Daniel García
642f04d493 Merge branch 'BlackDex-add-send-api-v2' 2022-09-25 18:58:48 +02:00
Aaron
fc6e65e4b0 fix: update warning and success case verbiage 2022-09-25 18:58:41 +02:00
Aaron
db5c98ec3b fix: tooltip typo 2022-09-25 18:58:40 +02:00
BlackDex
73c64af27e Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job if specific files are touched or not, we just need to make sure
these jobs are always started.

Also, because we now check the builds for an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changed. This is now also addressed by
naming that job 'msrv' instead of the version number.
2022-09-25 18:58:40 +02:00
Daniel García
b3f7db813f Merge branch 'djbrownbear-fix-diagnostic-typo' 2022-09-25 18:55:10 +02:00
BlackDex
59660ff087 Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job if specific files are touched or not, we just need to make sure
these jobs are always started.

Also, because we now check the builds for an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changed. This is now also addressed by
naming that job 'msrv' instead of the version number.
2022-09-25 18:55:04 +02:00
Daniel García
69a69e8e04 Merge branch 'BlackDex-update-workflow' 2022-09-25 18:54:54 +02:00
BlackDex
1094f359c3 Update libraries and Rust version
- Updated to Rust v1.64.0
- Updated all libraries
- Updated multer-rs to be based upon the latest version
- Updated Dockerfiles to match the Rust version
2022-09-25 16:44:34 +02:00
Stefan Melmuk
102ee3f871 add api_not_found catcher for 404 errors in /api 2022-09-25 10:59:01 +02:00
Stefan Melmuk
acb5ab08a8 add not_found catcher for 404 errors 2022-09-25 04:02:16 +02:00
BlackDex
ae59472d9a Fix organization vault export
Since v2022.9.x it seems they changed the export endpoint and the way it works.
This PR fixes this by adding the export endpoint.

Also, it looks like the clients can't handle JSON keys that start with an
uppercase letter. Because of this there is now a function which converts the
first character of all keys to lowercase.

I have reported an issue at Bitwarden asking whether this is expected behavior: https://github.com/bitwarden/clients/issues/3606

Fixes #2760
Fixes #2764
2022-09-24 18:27:13 +02:00
BlackDex
5a07b193dc Add support for send v2 API endpoints
This PR adds support for the Send v2 API.
It should prevent 404 errors which could cause some issues with some
configurations on some reverse proxies.

In the long run, we can probably remove the old file upload API, but for
now let's leave it there, since Bitwarden also still has this endpoint in
the code.

Might fix #2753
2022-09-22 19:40:04 +02:00
Aaron
fd2edb9adc fix: update warning and success case verbiage 2022-09-16 10:32:36 -07:00
Aaron
1d074f7b3f fix: tooltip typo 2022-09-15 15:36:21 -07:00
BlackDex
81984c4bce Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job if specific files are touched or not, we just need to make sure
these jobs are always started.

Also, because we now check the builds for an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changed. This is now also addressed by
naming that job 'msrv' instead of the version number.
2022-09-15 16:51:52 +02:00
Daniel García
9c891baad1 Merge pull request #2739 from BlackDex/fix-restore-revoke
Rename/Fix revoke/restore endpoints
2022-09-12 17:12:23 +02:00
Daniel García
b050c60807 Merge pull request #2738 from BlackDex/issue-2737
Fix issue 2737, unable to create org
2022-09-12 17:11:43 +02:00
BlackDex
e47a2fd0f3 Rename/Fix revoke/restore endpoints
In web-vault v2022.9.x it seems the endpoints changed.
 - activate > restore
 - deactivate > revoke

This PR adds those endpoints and renames the functions.
It also keeps the previous endpoints for now, to stay compatible with
older vault versions, just in case.
2022-09-12 16:08:36 +02:00
BlackDex
42b9cc73ac Fix issue 2737, unable to create org
There was a small oversight when upgrading to the v2022.9.0 web-vault version.
It seems the call to the /plans/ endpoint no longer provides authentication.

Removed this check and it seems to work again.

Fixes #2737
2022-09-12 14:10:54 +02:00
Daniel García
edca4248aa Use optional env as this variable isn't defined during CI 2022-09-08 18:01:27 +02:00
Daniel García
b1b6bc9be0 Update web vault to 2022.9.0 2022-09-08 17:46:02 +02:00
Daniel García
818b254cef Implement config endpoint 2022-09-08 17:38:00 +02:00
Daniel García
ddfac5e34b Merge branch 'BlackDex-web-vault-v2022.9-support' into main 2022-09-08 16:30:46 +02:00
Daniel García
8b5c945bad Merge branch 'web-vault-v2022.9-support' of https://github.com/BlackDex/vaultwarden into BlackDex-web-vault-v2022.9-support 2022-09-08 16:30:41 +02:00
Daniel García
50c5eb9c50 Merge branch 'BlackDex-vw-admin-updates' into main 2022-09-08 16:30:31 +02:00
BlackDex
94be67eac1 Added support for web-vault v2022.9
- The new web-vault version supports fastmail.com anon email; added the
  correct API host to support it.
- Removed Firefox Relay; this seems to only be supported on SaaS.
- Added a function to the two-factor API to prevent 404 errors.
2022-09-07 20:48:48 +02:00
BlackDex
5a05139efe Change the handling of login errors.
Previously, FlashMessage was used to provide an error message during login.
This PR changes that flow to not use a redirect for this, but instead render the HTML and respond with the correct status code where needed. This should solve some issues which were reported in the past.

Thanks to @RealOrangeOne, for initiating this with a PR.

Fixes #2448
Fixes #2712
Closes #2715

Co-authored-by: Jake Howard <git@theorangeone.net>
2022-09-06 17:27:20 +02:00
Daniel García
a62dc102fb Update web vault to 2022.8.1 and cargo dependencies 2022-09-04 23:18:27 +02:00
Daniel García
518d74ce21 Merge branch 'BlackDex-org-user-revoke-access' into main 2022-09-04 23:04:22 +02:00
Daniel García
7598997deb Merge branch 'org-user-revoke-access' of https://github.com/BlackDex/vaultwarden into BlackDex-org-user-revoke-access 2022-09-04 23:04:15 +02:00
Daniel García
3c876dc202 Merge branch 'Fvbor-patch-1' into main 2022-09-04 22:49:19 +02:00
BlackDex
1722742ab3 Add Org user revoke feature
This PR adds the new v2022.8.x revoke feature which allows an
organization owner or admin to revoke access for one or more users.

This PR also fixes several permissions and policy checks which were faulty.

- Modified some functions to use DB count features instead of iterating and counting afterwards.
- Rearranged some if statements (faster matching, or just one if instead of nested ifs)
- Added and fixed several policy checks where needed
- Some small updates on some response models
- Made some functions require an enum instead of an i32
2022-08-20 16:42:36 +02:00
Hagen Tasche
d9c0eb3cfc Update two external links to prevent tabnabbing
Added noopener to prevent tabnabbing
2022-08-17 08:14:19 +02:00
Hagen Tasche
0d990e1dc0 Open external link in new tab
The link to the backup documentation was opened in the active tab.
With this change it will open in a new tab, preventing tabnabbing.
2022-08-17 08:00:46 +02:00
Daniel García
60ed5ff99d Merge pull request #2675 from BlackDex/patch-multer
Fix uploads from mobile clients (and dep updates)
2022-08-04 23:47:48 +02:00
BlackDex
5b98bd66ee Fix uploads from mobile clients (and dep updates)
This patch fixes the file upload sent by the mobile clients.
It resolves #2644 by always providing a `Content-Type` even though one
isn't set in this specific case.

I do hope it will be fixed upstream, either by Bitwarden fixing the
client, or by Rocket allowing this to be overridden somehow.

Until then, we can use this patched version of multer-rs.

Issue @ Rocket: https://github.com/SergioBenitez/Rocket/issues/2299
Issue @ Bitwarden: https://github.com/bitwarden/mobile/issues/2018

Also updated some dependencies.
2022-08-04 23:28:45 +02:00
Daniel García
abd20777fe Merge pull request #2665 from BlackDex/update-deps-and-alpine-base 2022-08-01 23:48:14 +02:00
BlackDex
7f0d0cf8a4 Update MSRV to 1.60.0
The latest version of chrono-tz needs 1.60.0 because of phf.
Since chrono-tz has updated timezone information, I do think it is
useful in some cases around the world.
2022-08-01 16:21:06 +02:00
BlackDex
6e23a573fb Update deps and Alpine image
- Updated deps
- Updated Alpine images to 3.16
- Removed dumb-init, not needed anymore
- Some small shellcheck tweaks on the start/healthcheck scripts
2022-07-31 15:45:31 +02:00
Daniel García
ce9d93003c Merge pull request #2650 from BlackDex/mitigate-mobile-client-uploads
Mitigate attachment/send upload issues
2022-07-27 17:39:07 +02:00
BlackDex
abfa868423 Mitigate attachment/send upload issues
This PR attempts to mitigate (not fix) #2644.
There seems to be an issue when uploading files either as an attachment or
via Send from the mobile (Android) client.

The binary data gets transferred correctly to Vaultwarden (checked via
Wireshark), but the data is not parsed correctly for some reason.

Since the parsing is not done by Vaultwarden itself, I think we should
at least try to prevent saving the data and letting users think all is
fine.

Further investigation is needed to actually fix this issue.
This is just a quick patch.
2022-07-27 17:12:04 +02:00
Daniel García
331f6c08fe Merge branch 'BlackDex-update-github-actions' into main 2022-07-22 16:00:45 +02:00
Daniel García
c0efd3d419 Merge branch 'update-github-actions' of https://github.com/BlackDex/vaultwarden into BlackDex-update-github-actions 2022-07-22 16:00:40 +02:00
Daniel García
1385d75972 Merge branch 'BlackDex-fix-2622-persistent-volume-check' into main 2022-07-22 16:00:28 +02:00
BlackDex
9a787dd105 Fix persistent folder check within containers
The previous persistent folder check worked by checking if a file
exists. If you used a bind-mount, then this file is not there. But when
using a docker/podman volume those files are copied, which caused the
container to not start.

This change checks `/proc/self/mountinfo` for a specific pattern to
see whether the data folder is persistent or not.

Fixes #2622
2022-07-20 13:29:39 +02:00
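
A minimal sketch of the described approach, assuming the check scans /proc/self/mountinfo and treats the data folder as persistent when some mount point covers it; the exact pattern Vaultwarden matches may differ.

use std::fs;
use std::path::Path;

// A data folder counts as persistent when some mount point in
// /proc/self/mountinfo covers it (other than the root filesystem).
fn is_persistent(data_folder: &str) -> bool {
    let mountinfo = match fs::read_to_string("/proc/self/mountinfo") {
        Ok(contents) => contents,
        Err(_) => return false, // not on Linux, or /proc is unavailable
    };
    let data = Path::new(data_folder);
    mountinfo.lines().any(|line| {
        // Field 5 of each mountinfo line is the mount point.
        line.split_whitespace()
            .nth(4)
            .map(|mount_point| mount_point != "/" && data.starts_with(mount_point))
            .unwrap_or(false)
    })
}

fn main() {
    println!("/data persistent: {}", is_persistent("/data"));
}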
BlackDex
0dcc435bb4 Update build workflow for CI
Because we want to support MSRV, we also need to run a CI for this.
This PR adds checks for the MSRV and rust-toolchain defined versions.

It will also run all cargo test, clippy and fmt checks no matter the outcome of the previous job.
This will help when there are multiple issues, like clippy errors and formatting.
Previously it would show only the first failed check and stop.

It will also output a nice step summary with some details on which checks have failed.
Or it will output a success message.
2022-07-19 23:17:49 +02:00
Daniel García
f1a67663d1 Merge pull request #2624 from BlackDex/fix-2623-csp-icon-redirect
Fix issue with CSP and icon redirects
2022-07-18 00:40:59 +02:00
BlackDex
0f95bdc9bb Fix issue with CSP and icon redirects
When using anything other than the `internal` icon service, it would
trigger a CSP block because the redirects were not allowed.

This PR fixes #2623 by dynamically adding the needed CSP strings.
This should also work with custom services.

For Google I needed to add an extra check, because that service itself
redirects to its gstatic.com domain.
2022-07-17 16:21:03 +02:00
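
A minimal sketch of the idea, assuming the img-src directive of the CSP is extended with the origin of the configured icon service; the function and setting handling below are illustrative, not Vaultwarden's actual config code.

// Build an img-src directive for the configured icon service URL template.
fn csp_img_src(icon_service: &str) -> String {
    let mut sources = vec!["'self'".to_string(), "data:".to_string()];
    if icon_service != "internal" {
        // Keep only scheme + host of the configured service URL template, e.g.
        // "https://icons.example.com/{}/icon.png" -> "https://icons.example.com".
        if let Some(scheme_end) = icon_service.find("://") {
            let host_start = scheme_end + 3;
            if let Some(path_start) = icon_service[host_start..].find('/') {
                sources.push(icon_service[..host_start + path_start].to_string());
            }
        }
        // Google's icon service redirects to its gstatic.com domain,
        // which needs to be allowed as well.
        if icon_service.contains("google.com") {
            sources.push("https://*.gstatic.com".to_string());
        }
    }
    format!("img-src {};", sources.join(" "))
}

fn main() {
    println!("{}", csp_img_src("https://icons.duckduckgo.com/ip3/{}.ico"));
}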
116 changed files with 9108 additions and 4964 deletions

View File

@@ -1,13 +1,14 @@
# shellcheck disable=SC2034,SC2148
## Vaultwarden Configuration File
## Uncomment any of the following lines to change the defaults
##
## Be aware that most of these settings will be overridden if they were changed
## in the admin interface. Those overrides are stored within DATA_FOLDER/config.json .
##
## By default, vaultwarden expects for this file to be named ".env" and located
## By default, Vaultwarden expects for this file to be named ".env" and located
## in the current working directory. If this is not the case, the environment
## variable ENV_FILE can be set to the location of this file prior to starting
## vaultwarden.
## Vaultwarden.
## Main data folder
# DATA_FOLDER=data
@@ -80,11 +81,34 @@
## This setting applies globally to all users.
# EMERGENCY_ACCESS_ALLOWED=true
## Controls whether event logging is enabled for organizations
## This setting applies to organizations.
## Disabled by default. Also check the EVENT_CLEANUP_SCHEDULE and EVENTS_DAYS_RETAIN settings.
# ORG_EVENTS_ENABLED=false
## Number of days to retain events stored in the database.
## If unset (the default), events are kept indefinitely and the scheduled job is disabled!
# EVENTS_DAYS_RETAIN=
## BETA FEATURE: Groups
## Controls whether group support is enabled for organizations
## This setting applies to organizations.
## Disabled by default because this is a beta feature, it contains known issues!
## KNOW WHAT YOU ARE DOING!
# ORG_GROUPS_ENABLED=false
## Job scheduler settings
##
## Job schedules use a cron-like syntax (as parsed by https://crates.io/crates/cron),
## and are always in terms of UTC time (regardless of your local time zone settings).
##
## The schedule format is a bit different from crontab as crontab does not contain seconds.
## You can test the format here: https://crontab.guru, but remove the first digit!
## SEC MIN HOUR DAY OF MONTH MONTH DAY OF WEEK
## "0 30 9,12,15 1,15 May-Aug Mon,Wed,Fri"
## "0 30 * * * * "
## "0 30 1 * * * "
##
## How often (in ms) the job scheduler thread checks for jobs that need running.
## Set to 0 to globally disable scheduled jobs.
# JOB_POLL_INTERVAL_MS=30000
@@ -102,12 +126,16 @@
# INCOMPLETE_2FA_SCHEDULE="30 * * * * *"
##
## Cron schedule of the job that sends expiration reminders to emergency access grantors.
## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
# EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 5 * * * *"
## Defaults to hourly (3 minutes after the hour). Set blank to disable this job.
# EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 3 * * * *"
##
## Cron schedule of the job that grants emergency access requests that have met the required wait time.
## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
# EMERGENCY_REQUEST_TIMEOUT_SCHEDULE="0 5 * * * *"
## Defaults to hourly (7 minutes after the hour). Set blank to disable this job.
# EMERGENCY_REQUEST_TIMEOUT_SCHEDULE="0 7 * * * *"
##
## Cron schedule of the job that cleans old events from the event table.
## Defaults to daily. Set blank to disable this job. Also without EVENTS_DAYS_RETAIN set, this job will not start.
# EVENT_CLEANUP_SCHEDULE="0 10 0 * * *"
## Enable extended logging, which shows timestamps and targets in the logs
# EXTENDED_LOGGING=true
@@ -133,7 +161,7 @@
## Enable WAL for the DB
## Set to false to avoid enabling WAL during startup.
## Note that if the DB already has WAL enabled, you will also need to disable WAL in the DB,
## this setting only prevents vaultwarden from automatically enabling it on start.
## this setting only prevents Vaultwarden from automatically enabling it on start.
## Please read project wiki page about this setting first before changing the value as it can
## cause performance degradation or might render the service unable to start.
# ENABLE_DB_WAL=true
@@ -245,6 +273,10 @@
## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Vaultwarden
## The number of hours after which an organization invite token, emergency access invite token,
## email verification token and deletion request token will expire (must be at least 1)
# INVITATION_EXPIRATION_HOURS=120
## Per-organization attachment storage limit (KB)
## Max kilobytes of attachment storage allowed per organization.
## When this limit is reached, organization members will not be allowed to upload further attachments for ciphers owned by that organization.
@@ -298,7 +330,7 @@
## Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2.
# LOGIN_RATELIMIT_MAX_BURST=10
## Number of seconds, on average, between admin requests from the same IP address before rate limiting kicks in.
## Number of seconds, on average, between admin login requests from the same IP address before rate limiting kicks in.
# ADMIN_RATELIMIT_SECONDS=300
## Allow a burst of requests of up to this size, while maintaining the average indicated by `ADMIN_RATELIMIT_SECONDS`.
# ADMIN_RATELIMIT_MAX_BURST=3
@@ -348,7 +380,7 @@
# SMTP_FROM=vaultwarden@domain.tld
# SMTP_FROM_NAME=Vaultwarden
# SMTP_SECURITY=starttls # ("starttls", "force_tls", "off") Enable a secure connection. Default is "starttls" (Explicit - ports 587 or 25), "force_tls" (Implicit - port 465) or "off", no encryption (port 25)
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 is outdated and used with Implicit TLS.
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 (submissions) is used for encrypted submission (Implicit TLS).
# SMTP_USERNAME=username
# SMTP_PASSWORD=password
# SMTP_TIMEOUT=15
@@ -363,6 +395,9 @@
## but might need to be changed in case it trips some anti-spam filters
# HELO_NAME=
## Embed images as email attachments
# SMTP_EMBED_IMAGES=false
## SMTP debugging
## When set to true this will output very detailed SMTP messages.
## WARNING: This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!

1
.github/FUNDING.yml vendored
View File

@@ -1,2 +1,3 @@
github: dani-garcia
liberapay: dani-garcia
custom: ["https://paypal.me/DaniGG"]

View File

@@ -20,6 +20,8 @@ on:
jobs:
build:
runs-on: ubuntu-20.04
timeout-minutes: 120
# Make warnings errors, this is to prevent warnings slipping through.
# This is done globally to prevent rebuilds when the RUSTFLAGS env variable changes.
env:
@@ -28,118 +30,163 @@ jobs:
fail-fast: false
matrix:
channel:
- stable
target-triple:
- x86_64-unknown-linux-gnu
- "rust-toolchain" # The version defined in rust-toolchain
- "msrv" # The supported MSRV
include:
- target-triple: x86_64-unknown-linux-gnu
host-triple: x86_64-unknown-linux-gnu
features: [sqlite,mysql,postgresql,enable_mimalloc] # Remember to update the `cargo test` to match the amount of features
channel: stable
os: ubuntu-20.04
ext: ""
- channel: "msrv"
version: "1.60.0"
name: Build and Test ${{ matrix.channel }}
name: Building ${{ matrix.channel }}-${{ matrix.target-triple }}
runs-on: ${{ matrix.os }}
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # v3.0.2
- name: "Checkout"
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
# End Checkout the repo
# Install musl-tools when needed
- name: Install musl tools
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends musl-dev musl-tools cmake
if: matrix.target-triple == 'x86_64-unknown-linux-musl'
# End Install musl-tools when needed
# Install dependencies
- name: Install dependencies Ubuntu
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl sqlite build-essential libmariadb-dev-compat libpq-dev libssl-dev pkgconf
if: startsWith( matrix.os, 'ubuntu' )
- name: "Install dependencies Ubuntu"
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl sqlite build-essential libmariadb-dev-compat libpq-dev libssl-dev pkg-config
# End Install dependencies
# Enable Rust Caching
- uses: Swatinem/rust-cache@842ef286fff290e445b90b4002cc9807c3669641 # v1.3.0
# End Enable Rust Caching
# Determine rust-toolchain version
- name: Init Variables
id: toolchain
shell: bash
if: ${{ matrix.channel == 'rust-toolchain' }}
run: |
RUST_TOOLCHAIN="$(cat rust-toolchain)"
echo "RUST_TOOLCHAIN=${RUST_TOOLCHAIN}" | tee -a "${GITHUB_OUTPUT}"
# End Determine rust-toolchain version
# Uses the rust-toolchain file to determine version
- name: 'Install ${{ matrix.channel }}-${{ matrix.host-triple }} for target: ${{ matrix.target-triple }}'
uses: actions-rs/toolchain@b2417cde72dcf67f306c0ae8e0828a81bf0b189f # v1.0.6
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@55c7845fad90d0ae8b2e83715cb900e5e861e8cb # master @ 2022-10-25 - 21:40 GMT+2
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
profile: minimal
target: ${{ matrix.target-triple }}
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
components: clippy, rustfmt
# End Uses the rust-toolchain file to determine version
# Install the MSRV channel to be used
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@55c7845fad90d0ae8b2e83715cb900e5e861e8cb # master @ 2022-10-25 - 21:40 GMT+2
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: ${{ matrix.version }}
# End Install the MSRV channel to be used
# Enable Rust Caching
- uses: Swatinem/rust-cache@359a70e43a0bb8a13953b04a90f76428b4959bb6 # v2.2.0
# End Enable Rust Caching
# Show environment
- name: "Show environment"
run: |
rustc -vV
cargo -vV
# End Show environment
# Run cargo tests (In release mode to speed up future builds)
# First test all features together, afterwards test them separately.
- name: "`cargo test --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: test
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
# Test single features
# 0: sqlite
- name: "`cargo test --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: test
args: --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}
if: ${{ matrix.features[0] != '' }}
# 1: mysql
- name: "`cargo test --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: test
args: --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}
if: ${{ matrix.features[1] != '' }}
# 2: postgresql
- name: "`cargo test --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: test
args: --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}
if: ${{ matrix.features[2] != '' }}
- name: "test features: sqlite,mysql,postgresql,enable_mimalloc"
id: test_sqlite_mysql_postgresql_mimalloc
if: $${{ always() }}
run: |
cargo test --release --features sqlite,mysql,postgresql,enable_mimalloc
- name: "test features: sqlite,mysql,postgresql"
id: test_sqlite_mysql_postgresql
if: $${{ always() }}
run: |
cargo test --release --features sqlite,mysql,postgresql
- name: "test features: sqlite"
id: test_sqlite
if: $${{ always() }}
run: |
cargo test --release --features sqlite
- name: "test features: mysql"
id: test_mysql
if: $${{ always() }}
run: |
cargo test --release --features mysql
- name: "test features: postgresql"
id: test_postgresql
if: $${{ always() }}
run: |
cargo test --release --features postgresql
# End Run cargo tests
# Run cargo clippy, and fail on warnings (In release mode to speed up future builds)
- name: "`cargo clippy --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: clippy
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }} -- -D warnings
- name: "clippy features: sqlite,mysql,postgresql,enable_mimalloc"
id: clippy
if: ${{ always() && matrix.channel == 'rust-toolchain' }}
run: |
cargo clippy --release --features sqlite,mysql,postgresql,enable_mimalloc -- -D warnings
# End Run cargo clippy
# Run cargo fmt
- name: '`cargo fmt`'
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: fmt
args: --all -- --check
# Run cargo fmt (Only run on rust-toolchain defined version)
- name: "check formatting"
id: formatting
if: ${{ always() && matrix.channel == 'rust-toolchain' }}
run: |
cargo fmt --all -- --check
# End Run cargo fmt
# Build the binary
- name: "`cargo build --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: build
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
# Check for any previous failures, if there are stop, else continue.
# This is useful so all test/clippy/fmt actions are done, and they can all be addressed
- name: "Some checks failed"
if: ${{ failure() }}
run: |
echo "### :x: Checks Failed!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "|Job|Status|" >> $GITHUB_STEP_SUMMARY
echo "|---|------|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql,enable_mimalloc)|${{ steps.test_sqlite_mysql_postgresql_mimalloc.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql)|${{ steps.test_sqlite_mysql_postgresql.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite)|${{ steps.test_sqlite.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (mysql)|${{ steps.test_mysql.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (postgresql)|${{ steps.test_postgresql.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|clippy (sqlite,mysql,postgresql,enable_mimalloc)|${{ steps.clippy.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|fmt|${{ steps.formatting.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "Please check the failed jobs and fix where needed." >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
exit 1
# Check for any previous failures, if there are stop, else continue.
# This is useful so all test/clippy/fmt actions are done, and they can all be addressed
- name: "All checks passed"
if: ${{ success() }}
run: |
echo "### :tada: Checks Passed!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Build the binary to upload to the artifacts
- name: "build features: sqlite,mysql,postgresql"
if: ${{ matrix.channel == 'rust-toolchain' }}
run: |
cargo build --release --features sqlite,mysql,postgresql
# End Build the binary
# Upload artifact to Github Actions
- name: Upload artifact
uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8 # v3.1.0
- name: "Upload artifact"
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # v3.1.1
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
name: vaultwarden
path: target/release/vaultwarden
# End Upload artifact to Github Actions

View File

@@ -1,22 +1,19 @@
name: Hadolint
on:
push:
paths:
- "docker/**"
pull_request:
paths:
- "docker/**"
on: [
push,
pull_request
]
jobs:
hadolint:
name: Validate Dockerfile syntax
runs-on: ubuntu-20.04
timeout-minutes: 30
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # v3.0.2
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
# End Checkout the repo
@@ -27,7 +24,7 @@ jobs:
sudo curl -L https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
sudo chmod +x /usr/local/bin/hadolint
env:
HADOLINT_VERSION: 2.10.0
HADOLINT_VERSION: 2.12.0
# End Download hadolint
# Test Dockerfiles

View File

@@ -24,21 +24,22 @@ jobs:
# Some checks to determine if we need to continue with building a new docker.
# We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
skip_check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
steps:
- name: Skip Duplicates Actions
id: skip_check
uses: fkirc/skip-duplicate-actions@9d116fa7e55f295019cfab7e3ab72b478bcf7fdd # v4.0.0
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281 # v5.3.0
with:
cancel_others: 'true'
# Only run this when not creating a tag
if: ${{ startsWith(github.ref, 'refs/heads/') }}
docker-build:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 120
needs: skip_check
# Start a local docker registry to be used to generate multi-arch images.
services:
@@ -60,13 +61,13 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # v3.0.2
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
with:
fetch-depth: 0
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@49ed152c8eca782a232dede0303416e8f356c37b # v2.0.0
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
@@ -78,11 +79,9 @@ jobs:
run: |
# Check which main tag we are going to build determined by github.ref
if [[ "${{ github.ref }}" == refs/tags/* ]]; then
echo "set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
echo "::set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
echo "DOCKER_TAG=${GITHUB_REF#refs/*/}" | tee -a "${GITHUB_OUTPUT}"
elif [[ "${{ github.ref }}" == refs/heads/* ]]; then
echo "set-output name=DOCKER_TAG::testing"
echo "::set-output name=DOCKER_TAG::testing"
echo "DOCKER_TAG=testing" | tee -a "${GITHUB_OUTPUT}"
fi
# End Determine Docker Tag

View File

@@ -27,7 +27,7 @@ repos:
language: system
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--"]
types_or: [rust, file]
files: (Cargo.toml|Cargo.lock|.*\.rs$)
files: (Cargo.toml|Cargo.lock|rust-toolchain|.*\.rs$)
pass_filenames: false
- id: cargo-clippy
name: cargo clippy
@@ -36,5 +36,5 @@ repos:
language: system
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--", "-D", "warnings"]
types_or: [rust, file]
files: (Cargo.toml|Cargo.lock|.*\.rs$)
files: (Cargo.toml|Cargo.lock|rust-toolchain|.*\.rs$)
pass_filenames: false

1315
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -3,7 +3,7 @@ name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.59"
rust-version = "1.60.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
@@ -24,6 +24,11 @@ vendored_openssl = ["openssl/vendored"]
# Enable MiMalloc memory allocator to replace the default malloc
# This can improve performance for Alpine builds
enable_mimalloc = ["mimalloc"]
# This is a development dependency, and should only be used during development!
# It enables the usage of the diesel_logger crate, which is able to output the generated queries.
# You also need to set an env variable `QUERY_LOGGER=1` to fully activate this so you do not have to re-compile
# if you want to turn off the logging for a specific run.
query_logger = ["diesel_logger"]
# Enable unstable features, requires nightly
# Currently only used to enable rusts official ip support
@@ -37,15 +42,15 @@ syslog = "6.0.1" # Needs to be v4 until fern is updated
# Logging
log = "0.4.17"
fern = { version = "0.6.1", features = ["syslog-6"] }
tracing = { version = "0.1.35", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
tracing = { version = "0.1.37", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
backtrace = "0.3.66" # Logging panics to logfile instead stderr only
backtrace = "0.3.67" # Logging panics to logfile instead stderr only
# A `dotenv` implementation for Rust
dotenvy = { version = "0.15.1", default-features = false }
dotenvy = { version = "0.15.6", default-features = false }
# Lazy initialization
once_cell = "1.13.0"
once_cell = "1.16.0"
# Numerical libraries
num-traits = "0.2.15"
@@ -55,45 +60,46 @@ num-derive = "0.3.3"
rocket = { version = "0.5.0-rc.2", features = ["tls", "json"], default-features = false }
# WebSockets libraries
tokio-tungstenite = "0.17.2"
tokio-tungstenite = "0.18.0"
rmpv = "1.0.0" # MessagePack library
dashmap = "5.3.4" # Concurrent hashmap implementation
dashmap = "5.4.0"
# Async futures
futures = "0.3.21"
tokio = { version = "1.20.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time"] }
futures = "0.3.25"
tokio = { version = "1.23.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.139", features = ["derive"] }
serde_json = "1.0.82"
serde = { version = "1.0.150", features = ["derive"] }
serde_json = "1.0.89"
# A safe, extensible ORM and Query builder
diesel = { version = "1.4.8", features = ["chrono", "r2d2"] }
diesel_migrations = "1.4.0"
diesel = { version = "2.0.2", features = ["chrono", "r2d2"] }
diesel_migrations = "2.0.0"
diesel_logger = { version = "0.2.0", optional = true }
# Bundled SQLite
libsqlite3-sys = { version = "0.22.2", features = ["bundled"], optional = true }
libsqlite3-sys = { version = "0.25.2", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = { version = "0.8.5", features = ["small_rng"] }
ring = "0.16.20"
# UUID generation
uuid = { version = "1.1.2", features = ["v4"] }
uuid = { version = "1.2.2", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.19", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.6.1"
time = "0.3.11"
chrono = { version = "0.4.23", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.8.1"
time = "0.3.17"
# Job scheduler
job_scheduler_ng = "2.0.1"
job_scheduler_ng = "2.0.3"
# Data encoding library Hex/Base32/Base64
data-encoding = "2.3.2"
data-encoding = "2.3.3"
# JWT library
jsonwebtoken = "8.1.1"
jsonwebtoken = "8.2.0"
# TOTP library
totp-lite = "2.0.0"
@@ -105,45 +111,52 @@ yubico = { version = "0.11.0", features = ["online-tokio"], default-features = f
webauthn-rs = "0.3.2"
# Handling of URL's for WebAuthn
url = "2.2.2"
url = "2.3.1"
# Email libraries
lettre = { version = "0.10.0", features = ["smtp-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.1.0" # URL encoding library used for URL's in the emails
lettre = { version = "0.10.1", features = ["smtp-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.2.0" # URL encoding library used for URL's in the emails
email_address = "0.2.4"
# Template library
handlebars = { version = "4.3.2", features = ["dir_source"] }
handlebars = { version = "4.3.5", features = ["dir_source"] }
# HTTP client
reqwest = { version = "0.11.11", features = ["stream", "json", "gzip", "brotli", "socks", "cookies", "trust-dns"] }
reqwest = { version = "0.11.13", features = ["stream", "json", "gzip", "brotli", "socks", "cookies", "trust-dns"] }
# For favicon extraction from main website
html5gum = "0.5.2"
regex = { version = "1.6.0", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.1.1"
bytes = "1.1.0"
cached = "0.36.0"
regex = { version = "1.7.0", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.2.0"
bytes = "1.3.0"
cached = "0.40.0"
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.16.0"
cookie_store = "0.16.1"
cookie = "0.16.1"
cookie_store = "0.19.0"
# Used by U2F, JWT and Postgres
openssl = "0.10.41"
openssl = "0.10.44"
# CLI argument parsing
pico-args = "0.5.0"
# Macro ident concatenation
paste = "1.0.7"
governor = "0.4.2"
paste = "1.0.10"
governor = "0.5.1"
# Capture CTRL+C
ctrlc = { version = "3.2.2", features = ["termination"] }
# Check client versions for specific features.
semver = "1.0.14"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.29", features = ["secure"], default-features = false, optional = true }
mimalloc = { version = "0.1.32", features = ["secure"], default-features = false, optional = true }
[patch.crates-io]
# Using a patched version of multer-rs (Used by Rocket) to fix attachment/send file uploads
# Issue: https://github.com/dani-garcia/vaultwarden/issues/2644
# Patch: https://github.com/BlackDex/multer-rs/commit/477d16b7fa0f361b5c2a5ba18a5b28bec6d26a8a
multer = { git = "https://github.com/BlackDex/multer-rs", rev = "477d16b7fa0f361b5c2a5ba18a5b28bec6d26a8a" }
# Strip debuginfo from the release builds
# Also enable thin LTO for some optimizations

View File

@@ -7,12 +7,12 @@
[![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg)](https://hub.docker.com/r/vaultwarden/server)
[![Dependency Status](https://deps.rs/repo/github/dani-garcia/vaultwarden/status.svg)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![GPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/master/LICENSE.txt)
[![GPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt)
[![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?logo=matrix)](https://matrix.to/#/#vaultwarden:matrix.org)
Image is based on [Rust implementation of Bitwarden API](https://github.com/dani-garcia/vaultwarden).
**This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor 8bit Solutions LLC.**
**This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor Bitwarden, Inc.**
#### ⚠️**IMPORTANT**⚠️: When using this server, please report any bugs or suggestions to us directly (look at the bottom of this page for ways to get in touch), regardless of whatever clients you are using (mobile, desktop, browser...). DO NOT use the official support channels.
@@ -58,32 +58,34 @@ If you prefer to chat, we're usually hanging around at [#vaultwarden:matrix.org]
### Sponsors
Thanks for your contribution to the project!
<!--
<table>
<tr>
<td align="center">
<a href="https://github.com/netdadaltd">
<img src="https://avatars.githubusercontent.com/u/77323954?s=75&v=4" width="75px;" alt="netdadaltd"/>
<a href="https://github.com/username">
<img src="https://avatars.githubusercontent.com/u/725423?s=75&v=4" width="75px;" alt="username"/>
<br />
<sub><b>netDada Ltd.</b></sub>
<sub><b>username</b></sub>
</a>
</td>
</tr>
</table>
<br/>
-->
<table>
<tr>
<td align="center">
<a href="https://github.com/Gyarbij" style="width: 75px">
<sub><b>Chono N</b></sub>
<a href="https://github.com/themightychris" style="width: 75px">
<sub><b>Chris Alfano</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/themightychris">
<sub><b>Chris Alfano</b></sub>
<a href="https://github.com/numberly" style="width: 75px">
<sub><b>Numberly</b></sub>
</a>
</td>
</tr>

View File

@@ -9,20 +9,25 @@ fn main() {
println!("cargo:rustc-cfg=mysql");
#[cfg(feature = "postgresql")]
println!("cargo:rustc-cfg=postgresql");
#[cfg(feature = "query_logger")]
println!("cargo:rustc-cfg=query_logger");
#[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))]
compile_error!(
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
);
#[cfg(all(not(debug_assertions), feature = "query_logger"))]
compile_error!("Query Logging is only allowed during development, it is not intented for production usage!");
// Support $BWRS_VERSION for legacy compatibility, but default to $VW_VERSION.
// If neither exist, read from git.
let maybe_vaultwarden_version =
env::var("VW_VERSION").or_else(|_| env::var("BWRS_VERSION")).or_else(|_| version_from_git_info());
if let Ok(version) = maybe_vaultwarden_version {
println!("cargo:rustc-env=VW_VERSION={}", version);
println!("cargo:rustc-env=CARGO_PKG_VERSION={}", version);
println!("cargo:rustc-env=VW_VERSION={version}");
println!("cargo:rustc-env=CARGO_PKG_VERSION={version}");
}
}
@@ -47,29 +52,29 @@ fn version_from_git_info() -> Result<String, std::io::Error> {
// the current commit doesn't have an associated tag
let exact_tag = run(&["git", "describe", "--abbrev=0", "--tags", "--exact-match"]).ok();
if let Some(ref exact) = exact_tag {
println!("cargo:rustc-env=GIT_EXACT_TAG={}", exact);
println!("cargo:rustc-env=GIT_EXACT_TAG={exact}");
}
// The last available tag, equal to exact_tag when
// the current commit is tagged
let last_tag = run(&["git", "describe", "--abbrev=0", "--tags"])?;
println!("cargo:rustc-env=GIT_LAST_TAG={}", last_tag);
println!("cargo:rustc-env=GIT_LAST_TAG={last_tag}");
// The current branch name
let branch = run(&["git", "rev-parse", "--abbrev-ref", "HEAD"])?;
println!("cargo:rustc-env=GIT_BRANCH={}", branch);
println!("cargo:rustc-env=GIT_BRANCH={branch}");
// The current git commit hash
let rev = run(&["git", "rev-parse", "HEAD"])?;
let rev_short = rev.get(..8).unwrap_or_default();
println!("cargo:rustc-env=GIT_REV={}", rev_short);
println!("cargo:rustc-env=GIT_REV={rev_short}");
// Combined version
if let Some(exact) = exact_tag {
Ok(exact)
} else if &branch != "main" && &branch != "master" {
Ok(format!("{}-{} ({})", last_tag, rev_short, branch))
Ok(format!("{last_tag}-{rev_short} ({branch})"))
} else {
Ok(format!("{}-{}", last_tag, rev_short))
Ok(format!("{last_tag}-{rev_short}"))
}
}

View File

@@ -3,23 +3,23 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
{% set build_stage_base_image = "rust:1.61-bullseye" %}
{% set build_stage_base_image = "rust:1.66-bullseye" %}
{% if "alpine" in target_file %}
{% if "amd64" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:x86_64-musl-stable-1.61.0" %}
{% set runtime_stage_base_image = "alpine:3.15" %}
{% set build_stage_base_image = "blackdex/rust-musl:x86_64-musl-stable-1.66.0" %}
{% set runtime_stage_base_image = "alpine:3.17" %}
{% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "armv7" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:armv7-musleabihf-stable-1.61.0" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.15" %}
{% set build_stage_base_image = "blackdex/rust-musl:armv7-musleabihf-stable-1.66.0" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.17" %}
{% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% elif "armv6" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:arm-musleabi-stable-1.61.0" %}
{% set runtime_stage_base_image = "balenalib/rpi-alpine:3.15" %}
{% set build_stage_base_image = "blackdex/rust-musl:arm-musleabi-stable-1.66.0" %}
{% set runtime_stage_base_image = "balenalib/rpi-alpine:3.17" %}
{% set package_arch_target = "arm-unknown-linux-musleabi" %}
{% elif "arm64" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:aarch64-musl-stable-1.61.0" %}
{% set runtime_stage_base_image = "balenalib/aarch64-alpine:3.15" %}
{% set build_stage_base_image = "blackdex/rust-musl:aarch64-musl-stable-1.66.0" %}
{% set runtime_stage_base_image = "balenalib/aarch64-alpine:3.17" %}
{% set package_arch_target = "aarch64-unknown-linux-musl" %}
{% endif %}
{% elif "amd64" in target_file %}
@@ -59,8 +59,8 @@
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
{% set vault_version = "v2022.6.2" %}
{% set vault_image_digest = "sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70" %}
{% set vault_version = "v2022.12.0" %}
{% set vault_image_digest = "sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e" %}
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
@@ -181,14 +181,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }}
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -214,7 +206,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
{% else %}
&& apt-get update && apt-get install -y \
@@ -222,7 +213,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -250,7 +240,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
{% if package_arch_target is defined %}
COPY --from=build /app/target/{{ package_arch_target }}/release/vaultwarden .
{% else %}
@@ -262,10 +251,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with there entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Lets keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
FROM rust:1.66-bullseye as build
@@ -84,14 +84,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -109,7 +101,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -124,7 +115,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -132,10 +122,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with there entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Lets keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:x86_64-musl-stable-1.61.0 as build
FROM blackdex/rust-musl:x86_64-musl-stable-1.66.0 as build
@@ -78,18 +78,10 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.15
FROM alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -104,7 +96,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
@@ -116,7 +107,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -124,10 +114,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with there entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Lets keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
FROM rust:1.66-bullseye as build
@@ -84,14 +84,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -109,7 +101,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -124,7 +115,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -132,10 +122,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with there entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Lets keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:x86_64-musl-stable-1.61.0 as build
FROM blackdex/rust-musl:x86_64-musl-stable-1.66.0 as build
@@ -78,18 +78,10 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.15
FROM alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -104,7 +96,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
@@ -116,7 +107,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -124,10 +114,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with there entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Lets keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
FROM rust:1.66-bullseye as build
@@ -104,14 +104,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -131,7 +123,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -148,7 +139,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -156,10 +146,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with there entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Lets keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:aarch64-musl-stable-1.61.0 as build
FROM blackdex/rust-musl:aarch64-musl-stable-1.66.0 as build
@@ -78,18 +78,10 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-alpine:3.15
FROM balenalib/aarch64-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -106,7 +98,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
@@ -120,7 +111,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/aarch64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -128,10 +118,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
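
For context on the persistent-volume marker handled in the hunks above: an empty file created at build time is copied to /data/vaultwarden_docker_persistent_volume_check, so a volume mounted over /data hides it, while its presence at startup tells Vaultwarden that no persistent volume was attached. A minimal, hypothetical invocation under that scheme (the image tag and volume name are illustrative, not taken from this diff):

$ docker run -d -v vw-data:/data <vaultwarden-image>                          # the volume hides the marker, normal start
$ docker run -d -e I_REALLY_WANT_VOLATILE_STORAGE=true <vaultwarden-image>    # explicitly accept volatile storage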

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
FROM rust:1.66-bullseye as build
@@ -104,14 +104,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -131,7 +123,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -148,7 +139,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -156,10 +146,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:aarch64-musl-stable-1.61.0 as build
FROM blackdex/rust-musl:aarch64-musl-stable-1.66.0 as build
@@ -78,18 +78,10 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-alpine:3.15
FROM balenalib/aarch64-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -106,7 +98,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
@@ -120,7 +111,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/aarch64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -128,10 +118,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
FROM rust:1.66-bullseye as build
@@ -104,14 +104,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -131,7 +123,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -153,7 +144,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -161,10 +151,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:arm-musleabi-stable-1.61.0 as build
FROM blackdex/rust-musl:arm-musleabi-stable-1.66.0 as build
@@ -80,18 +80,10 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-alpine:3.15
FROM balenalib/rpi-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -108,7 +100,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
@@ -122,7 +113,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/arm-unknown-linux-musleabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -130,10 +120,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
FROM rust:1.66-bullseye as build
@@ -104,14 +104,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -131,7 +123,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -153,7 +144,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -161,10 +151,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:arm-musleabi-stable-1.61.0 as build
FROM blackdex/rust-musl:arm-musleabi-stable-1.66.0 as build
@@ -80,18 +80,10 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-alpine:3.15
FROM balenalib/rpi-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -108,7 +100,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
@@ -122,7 +113,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/arm-unknown-linux-musleabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -130,10 +120,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
FROM rust:1.66-bullseye as build
@@ -104,14 +104,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -131,7 +123,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -148,7 +139,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -156,10 +146,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.61.0 as build
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.66.0 as build
@@ -78,18 +78,10 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.15
FROM balenalib/armv7hf-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -106,7 +98,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
@@ -120,7 +111,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -128,10 +118,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
FROM rust:1.66-bullseye as build
@@ -104,14 +104,6 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
@@ -131,7 +123,6 @@ RUN mkdir /data \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
@@ -148,7 +139,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -156,10 +146,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -16,18 +16,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.6.2
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.6.2
# [vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70]
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70
# [vaultwarden/web-vault:v2022.6.2]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
#
FROM vaultwarden/web-vault@sha256:1dfda41cbddeac5bc59540261fff8defcac37170b5ba02d29c12fa1215498f70 as vault
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.61.0 as build
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.66.0 as build
@@ -78,18 +78,10 @@ RUN touch src/main.rs
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
# Create a special empty file which we check within the application.
# If this file exists, then we exit Vaultwarden to prevent data loss when someone forgets to use volumes.
# If you really really want to use volatile storage you can set the env `I_REALLY_WANT_VOLATILE_STORAGE=true`
# This file should disappear if a volume is mounted on-top of this using a docker volume.
# We run this in the build image and copy it over, because the runtime image could be missing some executables.
# hadolint ignore=DL3059
RUN touch /vaultwarden_docker_persistent_volume_check
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.15
FROM balenalib/armv7hf-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -106,7 +98,6 @@ RUN mkdir /data \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
@@ -120,7 +111,6 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /vaultwarden_docker_persistent_volume_check /data/vaultwarden_docker_persistent_volume_check
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
@@ -128,10 +118,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -2,8 +2,8 @@
# Use the value of the corresponding env var (if present),
# or a default value otherwise.
: ${DATA_FOLDER:="data"}
: ${ROCKET_PORT:="80"}
: "${DATA_FOLDER:="data"}"
: "${ROCKET_PORT:="80"}"
CONFIG_FILE="${DATA_FOLDER}"/config.json
@@ -45,9 +45,13 @@ if [ -r "${CONFIG_FILE}" ]; then
fi
fi
addr="${ROCKET_ADDRESS}"
if [ -z "${addr}" ] || [ "${addr}" = '0.0.0.0' ] || [ "${addr}" = '::' ]; then
addr='localhost'
fi
base_path="$(get_base_path "${DOMAIN}")"
if [ -n "${ROCKET_TLS}" ]; then
s='s'
fi
curl --insecure --fail --silent --show-error \
"http${s}://localhost:${ROCKET_PORT}${base_path}/alive" || exit 1
"http${s}://${addr}:${ROCKET_PORT}${base_path}/alive" || exit 1

View File

@@ -9,15 +9,15 @@ fi
if [ -d /etc/vaultwarden.d ]; then
for f in /etc/vaultwarden.d/*.sh; do
if [ -r $f ]; then
. $f
if [ -r "${f}" ]; then
. "${f}"
fi
done
elif [ -d /etc/bitwarden_rs.d ]; then
echo "### You are using the old /etc/bitwarden_rs.d script directory, please migrate to /etc/vaultwarden.d ###"
for f in /etc/bitwarden_rs.d/*.sh; do
if [ -r $f ]; then
. $f
if [ -r "${f}" ]; then
. "${f}"
fi
done
fi
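
The start script above sources every readable *.sh file from /etc/vaultwarden.d (or the legacy /etc/bitwarden_rs.d) before launching, so site-specific environment can be dropped in without rebuilding the image. A hypothetical drop-in, with variable names chosen only for illustration:

$ cat /etc/vaultwarden.d/10-smtp.sh
export SMTP_HOST=mail.example.com
export SMTP_PORT=587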

View File

@@ -0,0 +1,3 @@
DROP TABLE `groups`;
DROP TABLE groups_users;
DROP TABLE collections_groups;

View File

@@ -0,0 +1,23 @@
CREATE TABLE `groups` (
uuid CHAR(36) NOT NULL PRIMARY KEY,
organizations_uuid VARCHAR(40) NOT NULL REFERENCES organizations (uuid),
name VARCHAR(100) NOT NULL,
access_all BOOLEAN NOT NULL,
external_id VARCHAR(300) NULL,
creation_date DATETIME NOT NULL,
revision_date DATETIME NOT NULL
);
CREATE TABLE groups_users (
groups_uuid CHAR(36) NOT NULL REFERENCES `groups` (uuid),
users_organizations_uuid VARCHAR(36) NOT NULL REFERENCES users_organizations (uuid),
UNIQUE (groups_uuid, users_organizations_uuid)
);
CREATE TABLE collections_groups (
collections_uuid VARCHAR(40) NOT NULL REFERENCES collections (uuid),
groups_uuid CHAR(36) NOT NULL REFERENCES `groups` (uuid),
read_only BOOLEAN NOT NULL,
hide_passwords BOOLEAN NOT NULL,
UNIQUE (collections_uuid, groups_uuid)
);

View File

@@ -0,0 +1 @@
DROP TABLE event;

View File

@@ -0,0 +1,19 @@
CREATE TABLE event (
uuid CHAR(36) NOT NULL PRIMARY KEY,
event_type INTEGER NOT NULL,
user_uuid CHAR(36),
org_uuid CHAR(36),
cipher_uuid CHAR(36),
collection_uuid CHAR(36),
group_uuid CHAR(36),
org_user_uuid CHAR(36),
act_user_uuid CHAR(36),
device_type INTEGER,
ip_address TEXT,
event_date DATETIME NOT NULL,
policy_uuid CHAR(36),
provider_uuid CHAR(36),
provider_user_uuid CHAR(36),
provider_org_uuid CHAR(36),
UNIQUE (uuid)
);

View File

@@ -0,0 +1,3 @@
DROP TABLE groups;
DROP TABLE groups_users;
DROP TABLE collections_groups;

View File

@@ -0,0 +1,23 @@
CREATE TABLE groups (
uuid CHAR(36) NOT NULL PRIMARY KEY,
organizations_uuid VARCHAR(40) NOT NULL REFERENCES organizations (uuid),
name VARCHAR(100) NOT NULL,
access_all BOOLEAN NOT NULL,
external_id VARCHAR(300) NULL,
creation_date TIMESTAMP NOT NULL,
revision_date TIMESTAMP NOT NULL
);
CREATE TABLE groups_users (
groups_uuid CHAR(36) NOT NULL REFERENCES groups (uuid),
users_organizations_uuid VARCHAR(36) NOT NULL REFERENCES users_organizations (uuid),
PRIMARY KEY (groups_uuid, users_organizations_uuid)
);
CREATE TABLE collections_groups (
collections_uuid VARCHAR(40) NOT NULL REFERENCES collections (uuid),
groups_uuid CHAR(36) NOT NULL REFERENCES groups (uuid),
read_only BOOLEAN NOT NULL,
hide_passwords BOOLEAN NOT NULL,
PRIMARY KEY (collections_uuid, groups_uuid)
);

View File

@@ -0,0 +1 @@
DROP TABLE event;

View File

@@ -0,0 +1,19 @@
CREATE TABLE event (
uuid CHAR(36) NOT NULL PRIMARY KEY,
event_type INTEGER NOT NULL,
user_uuid CHAR(36),
org_uuid CHAR(36),
cipher_uuid CHAR(36),
collection_uuid CHAR(36),
group_uuid CHAR(36),
org_user_uuid CHAR(36),
act_user_uuid CHAR(36),
device_type INTEGER,
ip_address TEXT,
event_date TIMESTAMP NOT NULL,
policy_uuid CHAR(36),
provider_uuid CHAR(36),
provider_user_uuid CHAR(36),
provider_org_uuid CHAR(36),
UNIQUE (uuid)
);

View File

@@ -0,0 +1,3 @@
DROP TABLE groups;
DROP TABLE groups_users;
DROP TABLE collections_groups;

View File

@@ -0,0 +1,23 @@
CREATE TABLE groups (
uuid TEXT NOT NULL PRIMARY KEY,
organizations_uuid TEXT NOT NULL REFERENCES organizations (uuid),
name TEXT NOT NULL,
access_all BOOLEAN NOT NULL,
external_id TEXT NULL,
creation_date TIMESTAMP NOT NULL,
revision_date TIMESTAMP NOT NULL
);
CREATE TABLE groups_users (
groups_uuid TEXT NOT NULL REFERENCES groups (uuid),
users_organizations_uuid TEXT NOT NULL REFERENCES users_organizations (uuid),
UNIQUE (groups_uuid, users_organizations_uuid)
);
CREATE TABLE collections_groups (
collections_uuid TEXT NOT NULL REFERENCES collections (uuid),
groups_uuid TEXT NOT NULL REFERENCES groups (uuid),
read_only BOOLEAN NOT NULL,
hide_passwords BOOLEAN NOT NULL,
UNIQUE (collections_uuid, groups_uuid)
);

View File

@@ -0,0 +1 @@
DROP TABLE event;

View File

@@ -0,0 +1,19 @@
CREATE TABLE event (
uuid TEXT NOT NULL PRIMARY KEY,
event_type INTEGER NOT NULL,
user_uuid TEXT,
org_uuid TEXT,
cipher_uuid TEXT,
collection_uuid TEXT,
group_uuid TEXT,
org_user_uuid TEXT,
act_user_uuid TEXT,
device_type INTEGER,
ip_address TEXT,
event_date DATETIME NOT NULL,
policy_uuid TEXT,
provider_uuid TEXT,
provider_user_uuid TEXT,
provider_org_uuid TEXT,
UNIQUE (uuid)
);

resources/404.svg (new file, 93 lines)
View File

@@ -0,0 +1,93 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="500"
height="222"
viewBox="0 0 500 222"
version="1.1"
id="svg5"
xml:space="preserve"
inkscape:version="1.2.1 (9c6d41e410, 2022-07-14, custom)"
sodipodi:docname="404.svg"
inkscape:export-filename="404.png"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><sodipodi:namedview
id="namedview7"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1"
inkscape:document-units="px"
showgrid="false"
inkscape:zoom="1.3791767"
inkscape:cx="284.59007"
inkscape:cy="214.25826"
inkscape:window-width="1916"
inkscape:window-height="1038"
inkscape:window-x="0"
inkscape:window-y="18"
inkscape:window-maximized="1"
inkscape:current-layer="layer1"
showguides="false" /><defs
id="defs2"><mask
id="holes"><rect
x="-60"
y="-60"
width="120"
height="120"
fill="#ffffff"
id="rect3296" /><circle
id="hole"
cy="-40"
r="3"
cx="0" /><use
transform="rotate(72)"
xlink:href="#hole"
id="use3299" /><use
transform="rotate(144)"
xlink:href="#hole"
id="use3301" /><use
transform="rotate(-144)"
xlink:href="#hole"
id="use3303" /><use
transform="rotate(-72)"
xlink:href="#hole"
id="use3305" /></mask></defs><g
inkscape:label="Ebene 1"
inkscape:groupmode="layer"
id="layer1"><rect
style="fill:none;fill-opacity:0.5;stroke:none;stroke-width:0.74;stroke-opacity:1"
id="rect681"
width="666"
height="222"
x="0"
y="0" /><text
xml:space="preserve"
style="font-size:128px;line-height:1.25;font-family:'Open Sans';-inkscape-font-specification:'Open Sans';text-align:center;text-anchor:middle;fill:#000000;fill-opacity:0.7;stroke-width:1"
x="249.9375"
y="134.8125"
id="text3425"><tspan
id="tspan3423"
x="249.9375"
y="134.8125"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:128px;font-family:'Open Sans';-inkscape-font-specification:'Open Sans';text-align:center;text-anchor:middle;fill:#000000;fill-opacity:0.7;stroke-width:1"
sodipodi:role="line">404</tspan></text><text
xml:space="preserve"
style="font-size:26.6667px;line-height:1.25;font-family:'Open Sans';-inkscape-font-specification:'Open Sans';text-align:center;text-anchor:middle"
x="249.04297"
y="194.68582"
id="text4067"><tspan
sodipodi:role="line"
id="tspan4065"
x="249.04295"
y="194.68582"
style="font-size:26.6667px;text-align:center;text-anchor:middle;fill:#000000;fill-opacity:0.7">Return to the web vault?</tspan></text></g></svg>

After: 3.3 KiB

File diff suppressed because one or more lines are too long (image size: 8.7 KiB before, 5.5 KiB after)

File diff suppressed because one or more lines are too long (image size: 8.4 KiB before, 5.2 KiB after)

File diff suppressed because one or more lines are too long (image size: 17 KiB before, 6.5 KiB after)

File diff suppressed because one or more lines are too long (image size: 12 KiB before, 6.5 KiB after)

View File

@@ -1 +1 @@
1.61.0
1.66.0
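
If the single-line 1.61.0 to 1.66.0 change above is the repository's rust-toolchain file, it pins the compiler that rustup selects when building inside the repo; locally that only requires the toolchain to be installed, e.g.:

$ rustup toolchain install 1.66.0
$ rustc --version    # should report 1.66.0 when run from the repository root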

View File

@@ -6,14 +6,14 @@ use std::env;
use rocket::serde::json::Json;
use rocket::{
form::Form,
http::{Cookie, CookieJar, SameSite, Status},
request::{self, FlashMessage, FromRequest, Outcome, Request},
response::{content::RawHtml as Html, Flash, Redirect},
Route,
http::{Cookie, CookieJar, MediaType, SameSite, Status},
request::{FromRequest, Outcome, Request},
response::{content::RawHtml as Html, Redirect},
Catcher, Route,
};
use crate::{
api::{ApiResult, EmptyResult, JsonResult, NumberOrString},
api::{core::log_event, ApiResult, EmptyResult, JsonResult, NumberOrString},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
@@ -25,15 +25,12 @@ use crate::{
CONFIG, VERSION,
};
use futures::{stream, stream::StreamExt};
pub fn routes() -> Vec<Route> {
if !CONFIG.disable_admin_token() && !CONFIG.is_admin_token_set() {
return routes![admin_disabled];
}
routes![
admin_login,
get_users_json,
get_user_json,
post_admin_login,
@@ -59,6 +56,14 @@ pub fn routes() -> Vec<Route> {
]
}
pub fn catchers() -> Vec<Catcher> {
if !CONFIG.disable_admin_token() && !CONFIG.is_admin_token_set() {
catchers![]
} else {
catchers![admin_login]
}
}
static DB_TYPE: Lazy<&str> = Lazy::new(|| {
DbConnType::from_url(&CONFIG.database_url())
.map(|t| match t {
@@ -83,21 +88,12 @@ const DT_FMT: &str = "%Y-%m-%d %H:%M:%S %Z";
const BASE_TEMPLATE: &str = "admin/base";
const ACTING_ADMIN_USER: &str = "vaultwarden-admin-00000-000000000000";
fn admin_path() -> String {
format!("{}{}", CONFIG.domain_path(), ADMIN_PATH)
}
struct Referer(Option<String>);
#[rocket::async_trait]
impl<'r> FromRequest<'r> for Referer {
type Error = ();
async fn from_request(request: &'r Request<'_>) -> request::Outcome<Self, Self::Error> {
Outcome::Success(Referer(request.headers().get_one("Referer").map(str::to_string)))
}
}
#[derive(Debug)]
struct IpHeader(Option<String>);
@@ -120,35 +116,37 @@ impl<'r> FromRequest<'r> for IpHeader {
}
}
/// Used for `Location` response headers, which must specify an absolute URI
/// (see https://tools.ietf.org/html/rfc2616#section-14.30).
fn admin_url(referer: Referer) -> String {
// If we get a referer use that to make it work when DOMAIN is not set
if let Some(mut referer) = referer.0 {
if let Some(start_index) = referer.find(ADMIN_PATH) {
referer.truncate(start_index + ADMIN_PATH.len());
return referer;
}
}
if CONFIG.domain_set() {
// Don't use CONFIG.domain() directly, since the user may want to keep a
// trailing slash there, particularly when running under a subpath.
format!("{}{}{}", CONFIG.domain_origin(), CONFIG.domain_path(), ADMIN_PATH)
} else {
// Last case, when no referer or domain set, technically invalid but better than nothing
ADMIN_PATH.to_string()
}
fn admin_url() -> String {
format!("{}{}", CONFIG.domain_origin(), admin_path())
}
#[get("/", rank = 2)]
fn admin_login(flash: Option<FlashMessage<'_>>) -> ApiResult<Html<String>> {
#[derive(Responder)]
enum AdminResponse {
#[response(status = 200)]
Ok(ApiResult<Html<String>>),
#[response(status = 401)]
Unauthorized(ApiResult<Html<String>>),
#[response(status = 429)]
TooManyRequests(ApiResult<Html<String>>),
}
#[catch(401)]
fn admin_login(request: &Request<'_>) -> ApiResult<Html<String>> {
if request.format() == Some(&MediaType::JSON) {
err_code!("Authorization failed.", Status::Unauthorized.code);
}
let redirect = request.segments::<std::path::PathBuf>(0..).unwrap_or_default().display().to_string();
render_admin_login(None, Some(redirect))
}
fn render_admin_login(msg: Option<&str>, redirect: Option<String>) -> ApiResult<Html<String>> {
// If there is an error, show it
let msg = flash.map(|msg| format!("{}: {}", msg.kind(), msg.message()));
let msg = msg.map(|msg| format!("Error: {msg}"));
let json = json!({
"page_content": "admin/login",
"version": VERSION,
"error": msg,
"redirect": redirect,
"urlpath": CONFIG.domain_path()
});
@@ -160,25 +158,25 @@ fn admin_login(flash: Option<FlashMessage<'_>>) -> ApiResult<Html<String>> {
#[derive(FromForm)]
struct LoginForm {
token: String,
redirect: Option<String>,
}
#[post("/", data = "<data>")]
fn post_admin_login(
data: Form<LoginForm>,
cookies: &CookieJar<'_>,
ip: ClientIp,
referer: Referer,
) -> Result<Redirect, Flash<Redirect>> {
fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp) -> Result<Redirect, AdminResponse> {
let data = data.into_inner();
let redirect = data.redirect;
if crate::ratelimit::check_limit_admin(&ip.ip).is_err() {
return Err(Flash::error(Redirect::to(admin_url(referer)), "Too many requests, try again later."));
return Err(AdminResponse::TooManyRequests(render_admin_login(
Some("Too many requests, try again later."),
redirect,
)));
}
// If the token is invalid, redirect to login page
if !_validate_token(&data.token) {
error!("Invalid admin token. IP: {}", ip.ip);
Err(Flash::error(Redirect::to(admin_url(referer)), "Invalid admin token, please try again."))
Err(AdminResponse::Unauthorized(render_admin_login(Some("Invalid admin token, please try again."), redirect)))
} else {
// If the token received is valid, generate JWT and save it as a cookie
let claims = generate_admin_claims();
@@ -192,7 +190,11 @@ fn post_admin_login(
.finish();
cookies.add(cookie);
Ok(Redirect::to(admin_url(referer)))
if let Some(redirect) = redirect {
Ok(Redirect::to(format!("{}{}", admin_path(), redirect)))
} else {
Err(AdminResponse::Ok(render_admin_page()))
}
}
}
@@ -244,19 +246,23 @@ impl AdminTemplateData {
}
}
#[get("/", rank = 1)]
fn admin_page(_token: AdminToken) -> ApiResult<Html<String>> {
fn render_admin_page() -> ApiResult<Html<String>> {
let text = AdminTemplateData::new().render()?;
Ok(Html(text))
}
#[get("/")]
fn admin_page(_token: AdminToken) -> ApiResult<Html<String>> {
render_admin_page()
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct InviteData {
email: String,
}
async fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
async fn get_user_or_404(uuid: &str, conn: &mut DbConn) -> ApiResult<User> {
if let Some(user) = User::find_by_uuid(uuid, conn).await {
Ok(user)
} else {
@@ -265,28 +271,28 @@ async fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
}
#[post("/invite", data = "<data>")]
async fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> JsonResult {
async fn invite_user(data: Json<InviteData>, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let data: InviteData = data.into_inner();
let email = data.email.clone();
if User::find_by_mail(&data.email, &conn).await.is_some() {
if User::find_by_mail(&data.email, &mut conn).await.is_some() {
err_code!("User already exists", Status::Conflict.code)
}
let mut user = User::new(email);
async fn _generate_invite(user: &User, conn: &DbConn) -> EmptyResult {
async fn _generate_invite(user: &User, conn: &mut DbConn) -> EmptyResult {
if CONFIG.mail_enabled() {
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None).await
} else {
let invitation = Invitation::new(user.email.clone());
let invitation = Invitation::new(&user.email);
invitation.save(conn).await
}
}
_generate_invite(&user, &conn).await.map_err(|e| e.with_code(Status::InternalServerError.code))?;
user.save(&conn).await.map_err(|e| e.with_code(Status::InternalServerError.code))?;
_generate_invite(&user, &mut conn).await.map_err(|e| e.with_code(Status::InternalServerError.code))?;
user.save(&mut conn).await.map_err(|e| e.with_code(Status::InternalServerError.code))?;
Ok(Json(user.to_json(&conn).await))
Ok(Json(user.to_json(&mut conn).await))
}
#[post("/test/smtp", data = "<data>")]
@@ -301,99 +307,111 @@ async fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
}
#[get("/logout")]
fn logout(cookies: &CookieJar<'_>, referer: Referer) -> Redirect {
fn logout(cookies: &CookieJar<'_>) -> Redirect {
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
Redirect::to(admin_url(referer))
Redirect::to(admin_path())
}
#[get("/users")]
async fn get_users_json(_token: AdminToken, conn: DbConn) -> Json<Value> {
let users_json = stream::iter(User::get_all(&conn).await)
.then(|u| async {
let u = u; // Move out this single variable
let mut usr = u.to_json(&conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr
})
.collect::<Vec<Value>>()
.await;
async fn get_users_json(_token: AdminToken, mut conn: DbConn) -> Json<Value> {
let mut users_json = Vec::new();
for u in User::get_all(&mut conn).await {
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
users_json.push(usr);
}
Json(Value::Array(users_json))
}
#[get("/users/overview")]
async fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let users_json = stream::iter(User::get_all(&conn).await)
.then(|u| async {
let u = u; // Move out this single variable
let mut usr = u.to_json(&conn).await;
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn).await);
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn).await);
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn).await as i32));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["last_active"] = match u.last_active(&conn).await {
Some(dt) => json!(format_naive_datetime_local(&dt, DT_FMT)),
None => json!("Never"),
};
usr
})
.collect::<Vec<Value>>()
.await;
async fn users_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<String>> {
let mut users_json = Vec::new();
for u in User::get_all(&mut conn).await {
let mut usr = u.to_json(&mut conn).await;
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &mut conn).await);
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &mut conn).await);
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &mut conn).await as i32));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["last_active"] = match u.last_active(&mut conn).await {
Some(dt) => json!(format_naive_datetime_local(&dt, DT_FMT)),
None => json!("Never"),
};
users_json.push(usr);
}
let text = AdminTemplateData::with_data("admin/users", json!(users_json)).render()?;
Ok(Html(text))
}
#[get("/users/<uuid>")]
async fn get_user_json(uuid: String, _token: AdminToken, conn: DbConn) -> JsonResult {
let u = get_user_or_404(&uuid, &conn).await?;
let mut usr = u.to_json(&conn).await;
async fn get_user_json(uuid: String, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let u = get_user_or_404(&uuid, &mut conn).await?;
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
Ok(Json(usr))
}
#[post("/users/<uuid>/delete")]
async fn delete_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let user = get_user_or_404(&uuid, &conn).await?;
user.delete(&conn).await
async fn delete_user(uuid: String, _token: AdminToken, mut conn: DbConn, ip: ClientIp) -> EmptyResult {
let user = get_user_or_404(&uuid, &mut conn).await?;
// Get the user_org records before deleting the actual user
let user_orgs = UserOrganization::find_any_state_by_user(&uuid, &mut conn).await;
let res = user.delete(&mut conn).await;
for user_org in user_orgs {
log_event(
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
user_org.org_uuid,
String::from(ACTING_ADMIN_USER),
14, // Use UnknownBrowser type
&ip.ip,
&mut conn,
)
.await;
}
res
}
#[post("/users/<uuid>/deauth")]
async fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn).await?;
Device::delete_all_by_user(&user.uuid, &conn).await?;
async fn deauth_user(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.save(&conn).await
user.save(&mut conn).await
}
#[post("/users/<uuid>/disable")]
async fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn).await?;
Device::delete_all_by_user(&user.uuid, &conn).await?;
async fn disable_user(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.enabled = false;
user.save(&conn).await
user.save(&mut conn).await
}
#[post("/users/<uuid>/enable")]
async fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn).await?;
async fn enable_user(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
user.enabled = true;
user.save(&conn).await
user.save(&mut conn).await
}
#[post("/users/<uuid>/remove-2fa")]
async fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &conn).await?;
async fn remove_2fa(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
user.totp_recover = None;
user.save(&conn).await
user.save(&mut conn).await
}
#[derive(Deserialize, Debug)]
@@ -404,13 +422,19 @@ struct UserOrgTypeData {
}
#[post("/users/org_type", data = "<data>")]
async fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: DbConn) -> EmptyResult {
async fn update_user_org_type(
data: Json<UserOrgTypeData>,
_token: AdminToken,
mut conn: DbConn,
ip: ClientIp,
) -> EmptyResult {
let data: UserOrgTypeData = data.into_inner();
let mut user_to_edit = match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &conn).await {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
};
let mut user_to_edit =
match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &mut conn).await {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
};
let new_type = match UserOrgType::from_str(&data.user_type.into_string()) {
Some(new_type) => new_type as i32,
@@ -418,47 +442,66 @@ async fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, c
};
if user_to_edit.atype == UserOrgType::Owner && new_type != UserOrgType::Owner {
// Removing owner permission, check that there is at least one other owner
let num_owners =
UserOrganization::find_by_org_and_type(&data.org_uuid, UserOrgType::Owner as i32, &conn).await.len();
if num_owners <= 1 {
// Removing owner permission, check that there is at least one other confirmed owner
if UserOrganization::count_confirmed_by_org_and_type(&data.org_uuid, UserOrgType::Owner, &mut conn).await <= 1 {
err!("Can't change the type of the last owner")
}
}
// This check is also done at api::organizations::{accept_invite(), _confirm_invite(), _activate_user(), edit_user()} and update_user_org_type()
// It returns different error messages per function.
if new_type < UserOrgType::Admin {
match OrgPolicy::is_user_allowed(&user_to_edit.user_uuid, &user_to_edit.org_uuid, true, &mut conn).await {
Ok(_) => {}
Err(OrgPolicyErr::TwoFactorMissing) => {
err!("You cannot modify this user to this type because it has no two-step login method activated");
}
Err(OrgPolicyErr::SingleOrgEnforced) => {
err!("You cannot modify this user to this type because it is a member of an organization which forbids it");
}
}
}
log_event(
EventType::OrganizationUserUpdated as i32,
&user_to_edit.uuid,
data.org_uuid,
String::from(ACTING_ADMIN_USER),
14, // Use UnknownBrowser type
&ip.ip,
&mut conn,
)
.await;
user_to_edit.atype = new_type;
user_to_edit.save(&conn).await
user_to_edit.save(&mut conn).await
}
#[post("/users/update_revision")]
async fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
User::update_all_revisions(&conn).await
async fn update_revision_users(_token: AdminToken, mut conn: DbConn) -> EmptyResult {
User::update_all_revisions(&mut conn).await
}
#[get("/organizations/overview")]
async fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations_json = stream::iter(Organization::get_all(&conn).await)
.then(|o| async {
let o = o; //Move out this single variable
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn).await);
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn).await);
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn).await);
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn).await as i32));
org
})
.collect::<Vec<Value>>()
.await;
async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<String>> {
let mut organizations_json = Vec::new();
for o in Organization::get_all(&mut conn).await {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &mut conn).await);
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &mut conn).await);
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &mut conn).await);
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await as i32));
organizations_json.push(org);
}
let text = AdminTemplateData::with_data("admin/organizations", json!(organizations_json)).render()?;
Ok(Html(text))
}
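A note on the loop rewrites throughout this file: the stream::iter(..).then(..) pipelines were replaced by plain for loops because each call now takes &mut conn, and a mutable borrow cannot be handed to several in-flight futures at once. A self-contained sketch of the resulting shape, using placeholder types rather than Vaultwarden's:

struct Conn {
    queries_run: u32, // placeholder state standing in for a real DB connection
}

async fn count_for(_org_uuid: &str, conn: &mut Conn) -> u32 {
    conn.queries_run += 1;
    conn.queries_run
}

async fn overview(org_uuids: &[&str], conn: &mut Conn) -> Vec<u32> {
    let mut out = Vec::with_capacity(org_uuids.len());
    for &uuid in org_uuids {
        // Each await borrows `conn` mutably and releases it before the next iteration.
        out.push(count_for(uuid, conn).await);
    }
    out
}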
#[post("/organizations/<uuid>/delete")]
async fn delete_organization(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &conn).await.map_res("Organization doesn't exist")?;
org.delete(&conn).await
async fn delete_organization(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &mut conn).await.map_res("Organization doesn't exist")?;
org.delete(&mut conn).await
}
#[derive(Deserialize)]
@@ -498,7 +541,6 @@ use cached::proc_macro::cached;
async fn get_release_info(has_http_access: bool, running_within_docker: bool) -> (String, String, String) {
// If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
if has_http_access {
info!("Running get_release_info!!");
(
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest")
.await
@@ -535,15 +577,15 @@ async fn get_release_info(has_http_access: bool, running_within_docker: bool) ->
}
#[get("/diagnostics")]
async fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResult<Html<String>> {
async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn) -> ApiResult<Html<String>> {
use chrono::prelude::*;
use std::net::ToSocketAddrs;
// Get current running versions
let web_vault_version: WebVaultVersion =
match std::fs::read_to_string(&format!("{}/{}", CONFIG.web_vault_folder(), "vw-version.json")) {
match std::fs::read_to_string(format!("{}/{}", CONFIG.web_vault_folder(), "vw-version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => match std::fs::read_to_string(&format!("{}/{}", CONFIG.web_vault_folder(), "version.json")) {
_ => match std::fs::read_to_string(format!("{}/{}", CONFIG.web_vault_folder(), "version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => WebVaultVersion {
version: String::from("Version file missing"),
@@ -589,8 +631,8 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> A
"ip_header_config": &CONFIG.ip_header(),
"uses_proxy": uses_proxy,
"db_type": *DB_TYPE,
"db_version": get_sql_server_version(&conn).await,
"admin_url": format!("{}/diagnostics", admin_url(Referer(None))),
"db_version": get_sql_server_version(&mut conn).await,
"admin_url": format!("{}/diagnostics", admin_url()),
"overrides": &CONFIG.get_overrides().join(", "),
"server_time_local": Local::now().format("%Y-%m-%d %H:%M:%S %Z").to_string(),
"server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the date/time check as the last item to minimize the difference
@@ -618,9 +660,9 @@ fn delete_config(_token: AdminToken) -> EmptyResult {
}
#[post("/config/backup_db")]
async fn backup_db(_token: AdminToken, conn: DbConn) -> EmptyResult {
async fn backup_db(_token: AdminToken, mut conn: DbConn) -> EmptyResult {
if *CAN_BACKUP {
backup_database(&conn).await
backup_database(&mut conn).await
} else {
err!("Can't back up current DB (Only SQLite supports this feature)");
}
@@ -632,15 +674,15 @@ pub struct AdminToken {}
impl<'r> FromRequest<'r> for AdminToken {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> request::Outcome<Self, Self::Error> {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
if CONFIG.disable_admin_token() {
Outcome::Success(AdminToken {})
Outcome::Success(Self {})
} else {
let cookies = request.cookies();
let access_token = match cookies.get(COOKIE_NAME) {
Some(cookie) => cookie.value(),
None => return Outcome::Forward(()), // If there is no cookie, redirect to login
None => return Outcome::Failure((Status::Unauthorized, "Unauthorized")),
};
let ip = match ClientIp::from_request(request).await {
@@ -652,10 +694,10 @@ impl<'r> FromRequest<'r> for AdminToken {
// Remove admin cookie
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
error!("Invalid or expired admin JWT. IP: {}.", ip);
return Outcome::Forward(());
return Outcome::Failure((Status::Unauthorized, "Session expired"));
}
Outcome::Success(AdminToken {})
Outcome::Success(Self {})
}
}
}


@@ -3,8 +3,10 @@ use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType},
auth::{decode_delete, decode_invite, decode_verify_email, Headers},
api::{
core::log_user_event, EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType,
},
auth::{decode_delete, decode_invite, decode_verify_email, ClientIp, Headers},
crypto,
db::{models::*, DbConn},
mail, CONFIG,
@@ -36,12 +38,13 @@ pub fn routes() -> Vec<rocket::Route> {
verify_password,
api_key,
rotate_api_key,
get_known_device,
]
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct RegisterData {
pub struct RegisterData {
Email: String,
Kdf: Option<i32>,
KdfIterations: Option<i32>,
@@ -81,7 +84,11 @@ fn enforce_password_hint_setting(password_hint: &Option<String>) -> EmptyResult
}
#[post("/accounts/register", data = "<data>")]
async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, conn).await
}
pub async fn _register(data: JsonUpcase<RegisterData>, mut conn: DbConn) -> JsonResult {
let data: RegisterData = data.into_inner().data;
let email = data.Email.to_lowercase();
@@ -98,33 +105,34 @@ async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
let password_hint = clean_password_hint(&data.MasterPasswordHint);
enforce_password_hint_setting(&password_hint)?;
let mut user = match User::find_by_mail(&email, &conn).await {
Some(user) => {
let mut verified_by_invite = false;
let mut user = match User::find_by_mail(&email, &mut conn).await {
Some(mut user) => {
if !user.password_hash.is_empty() {
if CONFIG.is_signup_allowed(&email) {
err!("User already exists")
} else {
err!("Registration not allowed or user already exists")
}
err!("Registration not allowed or user already exists")
}
if let Some(token) = data.Token {
let claims = decode_invite(&token)?;
if claims.email == email {
// Verify the email address when signing up via a valid invite token
verified_by_invite = true;
user.verified_at = Some(Utc::now().naive_utc());
user
} else {
err!("Registration email does not match invite email")
}
} else if Invitation::take(&email, &conn).await {
for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &conn).await.iter_mut() {
} else if Invitation::take(&email, &mut conn).await {
for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &mut conn).await.iter_mut() {
user_org.status = UserOrgStatus::Accepted as i32;
user_org.save(&conn).await?;
user_org.save(&mut conn).await?;
}
user
} else if EmergencyAccess::find_invited_by_grantee_email(&email, &conn).await.is_some() {
} else if CONFIG.is_signup_allowed(&email)
|| EmergencyAccess::find_invited_by_grantee_email(&email, &mut conn).await.is_some()
{
user
} else if CONFIG.is_signup_allowed(&email) {
err!("Account with this email already exists")
} else {
err!("Registration not allowed or user already exists")
}
@@ -133,7 +141,7 @@ async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
// Order is important here; the invitation check must come first
// because the vaultwarden admin can invite anyone, regardless
// of other signup restrictions.
if Invitation::take(&email, &conn).await || CONFIG.is_signup_allowed(&email) {
if Invitation::take(&email, &mut conn).await || CONFIG.is_signup_allowed(&email) {
User::new(email.clone())
} else {
err!("Registration not allowed or user already exists")
@@ -142,7 +150,7 @@ async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
};
// Make sure we don't leave a lingering invitation.
Invitation::take(&email, &conn).await;
Invitation::take(&email, &mut conn).await;
if let Some(client_kdf_iter) = data.KdfIterations {
user.client_kdf_iter = client_kdf_iter;
@@ -167,7 +175,7 @@ async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
}
if CONFIG.mail_enabled() {
if CONFIG.signups_verify() {
if CONFIG.signups_verify() && !verified_by_invite {
if let Err(e) = mail::send_welcome_must_verify(&user.email, &user.uuid).await {
error!("Error sending welcome email: {:#?}", e);
}
@@ -178,20 +186,23 @@ async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
}
}
user.save(&conn).await
user.save(&mut conn).await?;
Ok(Json(json!({
"Object": "register",
"CaptchaBypassToken": "",
})))
}
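For illustration only: the JSON body the register handler now returns can be consumed on a client roughly as below. The struct is hypothetical and simply mirrors the two fields built above; it is not part of this changeset.

use serde::Deserialize;

#[derive(Deserialize)]
#[allow(non_snake_case)]
struct RegisterResponse {
    Object: String,
    CaptchaBypassToken: String,
}

fn parse_register_response(body: &str) -> serde_json::Result<RegisterResponse> {
    serde_json::from_str(body)
}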
#[get("/accounts/profile")]
async fn profile(headers: Headers, conn: DbConn) -> Json<Value> {
Json(headers.user.to_json(&conn).await)
async fn profile(headers: Headers, mut conn: DbConn) -> Json<Value> {
Json(headers.user.to_json(&mut conn).await)
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct ProfileData {
#[serde(rename = "Culture")]
_Culture: String, // Ignored, always use en-US
MasterPasswordHint: Option<String>,
// Culture: String, // Ignored, always use en-US
// MasterPasswordHint: Option<String>, // Ignored, has been moved to ChangePassData
Name: String,
}
@@ -201,7 +212,7 @@ async fn put_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbCo
}
#[post("/accounts/profile", data = "<data>")]
async fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: ProfileData = data.into_inner().data;
// Check if the length of the username exceeds 50 characters (same as upstream Bitwarden)
@@ -212,16 +223,14 @@ async fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbC
let mut user = headers.user;
user.name = data.Name;
user.password_hint = clean_password_hint(&data.MasterPasswordHint);
enforce_password_hint_setting(&user.password_hint)?;
user.save(&conn).await?;
Ok(Json(user.to_json(&conn).await))
user.save(&mut conn).await?;
Ok(Json(user.to_json(&mut conn).await))
}
#[get("/users/<uuid>/public-key")]
async fn get_public_keys(uuid: String, _headers: Headers, conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(&uuid, &conn).await {
async fn get_public_keys(uuid: String, _headers: Headers, mut conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(&uuid, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -234,7 +243,7 @@ async fn get_public_keys(uuid: String, _headers: Headers, conn: DbConn) -> JsonR
}
#[post("/accounts/keys", data = "<data>")]
async fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: KeysData = data.into_inner().data;
let mut user = headers.user;
@@ -242,7 +251,7 @@ async fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -
user.private_key = Some(data.EncryptedPrivateKey);
user.public_key = Some(data.PublicKey);
user.save(&conn).await?;
user.save(&mut conn).await?;
Ok(Json(json!({
"PrivateKey": user.private_key,
@@ -256,11 +265,17 @@ async fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -
struct ChangePassData {
MasterPasswordHash: String,
NewMasterPasswordHash: String,
MasterPasswordHint: Option<String>,
Key: String,
}
#[post("/accounts/password", data = "<data>")]
async fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_password(
data: JsonUpcase<ChangePassData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> EmptyResult {
let data: ChangePassData = data.into_inner().data;
let mut user = headers.user;
@@ -268,12 +283,17 @@ async fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn:
err!("Invalid password")
}
user.password_hint = clean_password_hint(&data.MasterPasswordHint);
enforce_password_hint_setting(&user.password_hint)?;
log_user_event(EventType::UserChangedPassword as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
user.set_password(
&data.NewMasterPasswordHash,
Some(vec![String::from("post_rotatekey"), String::from("get_contacts"), String::from("get_public_keys")]),
);
user.akey = data.Key;
user.save(&conn).await
user.save(&mut conn).await
}
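Since MasterPasswordHint moved from ProfileData into ChangePassData, the change-password payload now carries the hint alongside the hashes and key. A sketch of that shape with placeholder values, not taken from the diff:

fn example_change_pass_body() -> serde_json::Value {
    serde_json::json!({
        "MasterPasswordHash": "<current master password hash>",
        "NewMasterPasswordHash": "<new master password hash>",
        "MasterPasswordHint": "optional hint, now part of this payload",
        "Key": "<re-encrypted user key>",
    })
}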
#[derive(Deserialize)]
@@ -288,7 +308,7 @@ struct ChangeKdfData {
}
#[post("/accounts/kdf", data = "<data>")]
async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: ChangeKdfData = data.into_inner().data;
let mut user = headers.user;
@@ -300,7 +320,7 @@ async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbCon
user.client_kdf_type = data.Kdf;
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.save(&conn).await
user.save(&mut conn).await
}
#[derive(Deserialize)]
@@ -323,7 +343,13 @@ struct KeyData {
}
#[post("/accounts/key", data = "<data>")]
async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
async fn post_rotatekey(
data: JsonUpcase<KeyData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
nt: Notify<'_>,
) -> EmptyResult {
let data: KeyData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
@@ -334,7 +360,7 @@ async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbCon
// Update folder data
for folder_data in data.Folders {
let mut saved_folder = match Folder::find_by_uuid(&folder_data.Id, &conn).await {
let mut saved_folder = match Folder::find_by_uuid(&folder_data.Id, &mut conn).await {
Some(folder) => folder,
None => err!("Folder doesn't exist"),
};
@@ -344,14 +370,14 @@ async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbCon
}
saved_folder.name = folder_data.Name;
saved_folder.save(&conn).await?
saved_folder.save(&mut conn).await?
}
// Update cipher data
use super::ciphers::update_cipher_from_data;
for cipher_data in data.Ciphers {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.Id.as_ref().unwrap(), &conn).await {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.Id.as_ref().unwrap(), &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
};
@@ -362,7 +388,8 @@ async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbCon
// Prevent triggering cipher updates via WebSockets by setting UpdateType::None
// The user sessions are invalidated because all the ciphers were re-encrypted and thus triggering an update could cause issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::None).await?
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &mut conn, &ip, &nt, UpdateType::None)
.await?
}
// Update user data
@@ -372,11 +399,11 @@ async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbCon
user.private_key = Some(data.PrivateKey);
user.reset_security_stamp();
user.save(&conn).await
user.save(&mut conn).await
}
#[post("/accounts/security-stamp", data = "<data>")]
async fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let mut user = headers.user;
@@ -384,9 +411,9 @@ async fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbC
err!("Invalid password")
}
Device::delete_all_by_user(&user.uuid, &conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.save(&conn).await
user.save(&mut conn).await
}
#[derive(Deserialize)]
@@ -397,7 +424,7 @@ struct EmailTokenData {
}
#[post("/accounts/email-token", data = "<data>")]
async fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: EmailTokenData = data.into_inner().data;
let mut user = headers.user;
@@ -405,7 +432,7 @@ async fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, co
err!("Invalid password")
}
if User::find_by_mail(&data.NewEmail, &conn).await.is_some() {
if User::find_by_mail(&data.NewEmail, &mut conn).await.is_some() {
err!("Email already in use");
}
@@ -423,7 +450,7 @@ async fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, co
user.email_new = Some(data.NewEmail);
user.email_new_token = Some(token);
user.save(&conn).await
user.save(&mut conn).await
}
#[derive(Deserialize)]
@@ -438,7 +465,7 @@ struct ChangeEmailData {
}
#[post("/accounts/email", data = "<data>")]
async fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: ChangeEmailData = data.into_inner().data;
let mut user = headers.user;
@@ -446,7 +473,7 @@ async fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: D
err!("Invalid password")
}
if User::find_by_mail(&data.NewEmail, &conn).await.is_some() {
if User::find_by_mail(&data.NewEmail, &mut conn).await.is_some() {
err!("Email already in use");
}
@@ -481,7 +508,7 @@ async fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: D
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.save(&conn).await
user.save(&mut conn).await
}
#[post("/accounts/verify-email")]
@@ -507,10 +534,10 @@ struct VerifyEmailTokenData {
}
#[post("/accounts/verify-email-token", data = "<data>")]
async fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, conn: DbConn) -> EmptyResult {
async fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, mut conn: DbConn) -> EmptyResult {
let data: VerifyEmailTokenData = data.into_inner().data;
let mut user = match User::find_by_uuid(&data.UserId, &conn).await {
let mut user = match User::find_by_uuid(&data.UserId, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -525,7 +552,7 @@ async fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, conn: D
user.verified_at = Some(Utc::now().naive_utc());
user.last_verifying_at = None;
user.login_verify_count = 0;
if let Err(e) = user.save(&conn).await {
if let Err(e) = user.save(&mut conn).await {
error!("Error saving email verification: {:#?}", e);
}
@@ -539,11 +566,11 @@ struct DeleteRecoverData {
}
#[post("/accounts/delete-recover", data = "<data>")]
async fn post_delete_recover(data: JsonUpcase<DeleteRecoverData>, conn: DbConn) -> EmptyResult {
async fn post_delete_recover(data: JsonUpcase<DeleteRecoverData>, mut conn: DbConn) -> EmptyResult {
let data: DeleteRecoverData = data.into_inner().data;
if CONFIG.mail_enabled() {
if let Some(user) = User::find_by_mail(&data.Email, &conn).await {
if let Some(user) = User::find_by_mail(&data.Email, &mut conn).await {
if let Err(e) = mail::send_delete_account(&user.email, &user.uuid).await {
error!("Error sending delete account email: {:#?}", e);
}
@@ -566,10 +593,10 @@ struct DeleteRecoverTokenData {
}
#[post("/accounts/delete-recover-token", data = "<data>")]
async fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, conn: DbConn) -> EmptyResult {
async fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, mut conn: DbConn) -> EmptyResult {
let data: DeleteRecoverTokenData = data.into_inner().data;
let user = match User::find_by_uuid(&data.UserId, &conn).await {
let user = match User::find_by_uuid(&data.UserId, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -581,7 +608,7 @@ async fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, con
if claims.sub != user.uuid {
err!("Invalid claim");
}
user.delete(&conn).await
user.delete(&mut conn).await
}
#[post("/accounts/delete", data = "<data>")]
@@ -590,7 +617,7 @@ async fn post_delete_account(data: JsonUpcase<PasswordData>, headers: Headers, c
}
#[delete("/accounts", data = "<data>")]
async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -598,7 +625,7 @@ async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn:
err!("Invalid password")
}
user.delete(&conn).await
user.delete(&mut conn).await
}
#[get("/accounts/revision-date")]
@@ -614,7 +641,7 @@ struct PasswordHintData {
}
#[post("/accounts/password-hint", data = "<data>")]
async fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResult {
async fn password_hint(data: JsonUpcase<PasswordHintData>, mut conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() && !CONFIG.show_password_hint() {
err!("This server is not configured to provide password hints.");
}
@@ -624,7 +651,7 @@ async fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> Empt
let data: PasswordHintData = data.into_inner().data;
let email = &data.Email;
match User::find_by_mail(email, &conn).await {
match User::find_by_mail(email, &mut conn).await {
None => {
// To prevent user enumeration, act as if the user exists.
if CONFIG.mail_enabled() {
@@ -666,10 +693,10 @@ async fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
pub async fn _prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
pub async fn _prelogin(data: JsonUpcase<PreloginData>, mut conn: DbConn) -> Json<Value> {
let data: PreloginData = data.into_inner().data;
let (kdf_type, kdf_iter) = match User::find_by_mail(&data.Email, &conn).await {
let (kdf_type, kdf_iter) = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => (user.client_kdf_type, user.client_kdf_iter),
None => (User::CLIENT_KDF_TYPE_DEFAULT, User::CLIENT_KDF_ITER_DEFAULT),
};
@@ -703,7 +730,7 @@ async fn _api_key(
data: JsonUpcase<SecretVerificationRequest>,
rotate: bool,
headers: Headers,
conn: DbConn,
mut conn: DbConn,
) -> JsonResult {
let data: SecretVerificationRequest = data.into_inner().data;
let mut user = headers.user;
@@ -714,7 +741,7 @@ async fn _api_key(
if rotate || user.api_key.is_none() {
user.api_key = Some(crypto::generate_api_key());
user.save(&conn).await.expect("Error saving API key");
user.save(&mut conn).await.expect("Error saving API key");
}
Ok(Json(json!({
@@ -732,3 +759,16 @@ async fn api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers,
async fn rotate_api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, true, headers, conn).await
}
#[get("/devices/knowndevice/<email>/<uuid>")]
async fn get_known_device(email: String, uuid: String, mut conn: DbConn) -> String {
// This endpoint doesn't have an auth header
if let Some(user) = User::find_by_mail(&email, &mut conn).await {
match Device::find_by_uuid_and_user(&uuid, &user.uuid, &mut conn).await {
Some(_) => String::from("true"),
_ => String::from("false"),
}
} else {
String::from("false")
}
}
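A hypothetical client-side check against the new endpoint above; the base URL must include whatever prefix the routes are mounted under, and blocking reqwest is used only to keep the sketch short:

fn is_known_device(base_url: &str, email: &str, device_uuid: &str) -> Result<bool, reqwest::Error> {
    // The handler returns the plain strings "true" or "false".
    let url = format!("{base_url}/devices/knowndevice/{email}/{device_uuid}");
    let body = reqwest::blocking::get(url)?.text()?;
    Ok(body == "true")
}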

File diff suppressed because it is too large.


@@ -1,8 +1,6 @@
use chrono::{Duration, Utc};
use rocket::serde::json::Json;
use rocket::Route;
use rocket::{serde::json::Json, Route};
use serde_json::Value;
use std::borrow::Borrow;
use crate::{
api::{
@@ -14,8 +12,6 @@ use crate::{
mail, CONFIG,
};
use futures::{stream, stream::StreamExt};
pub fn routes() -> Vec<Route> {
routes![
get_contacts,
@@ -41,17 +37,14 @@ pub fn routes() -> Vec<Route> {
// region get
#[get("/emergency-access/trusted")]
async fn get_contacts(headers: Headers, conn: DbConn) -> JsonResult {
async fn get_contacts(headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list_json =
stream::iter(EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &conn).await)
.then(|e| async {
let e = e; // Move out this single variable
e.to_json_grantee_details(&conn).await
})
.collect::<Vec<Value>>()
.await;
let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &mut conn).await;
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantee_details(&mut conn).await);
}
Ok(Json(json!({
"Data": emergency_access_list_json,
@@ -61,17 +54,14 @@ async fn get_contacts(headers: Headers, conn: DbConn) -> JsonResult {
}
#[get("/emergency-access/granted")]
async fn get_grantees(headers: Headers, conn: DbConn) -> JsonResult {
async fn get_grantees(headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list_json =
stream::iter(EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &conn).await)
.then(|e| async {
let e = e; // Move out this single variable
e.to_json_grantor_details(&conn).await
})
.collect::<Vec<Value>>()
.await;
let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &mut conn).await;
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantor_details(&mut conn).await);
}
Ok(Json(json!({
"Data": emergency_access_list_json,
@@ -81,11 +71,11 @@ async fn get_grantees(headers: Headers, conn: DbConn) -> JsonResult {
}
#[get("/emergency-access/<emer_id>")]
async fn get_emergency_access(emer_id: String, conn: DbConn) -> JsonResult {
async fn get_emergency_access(emer_id: String, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&conn).await)),
match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&mut conn).await)),
None => err!("Emergency access not valid."),
}
}
@@ -94,7 +84,7 @@ async fn get_emergency_access(emer_id: String, conn: DbConn) -> JsonResult {
// region put/post
#[derive(Deserialize, Debug)]
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct EmergencyAccessUpdateData {
Type: NumberOrString,
@@ -115,13 +105,13 @@ async fn put_emergency_access(
async fn post_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessUpdateData>,
conn: DbConn,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessUpdateData = data.into_inner().data;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emergency_access) => emergency_access,
None => err!("Emergency access not valid."),
};
@@ -135,7 +125,7 @@ async fn post_emergency_access(
emergency_access.wait_time_days = data.WaitTimeDays;
emergency_access.key_encrypted = data.KeyEncrypted;
emergency_access.save(&conn).await?;
emergency_access.save(&mut conn).await?;
Ok(Json(emergency_access.to_json()))
}
@@ -144,12 +134,12 @@ async fn post_emergency_access(
// region delete
#[delete("/emergency-access/<emer_id>")]
async fn delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
async fn delete_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let grantor_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => {
if emer.grantor_uuid != grantor_user.uuid && emer.grantee_uuid != Some(grantor_user.uuid) {
err!("Emergency access not valid.")
@@ -158,7 +148,7 @@ async fn delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn
}
None => err!("Emergency access not valid."),
};
emergency_access.delete(&conn).await?;
emergency_access.delete(&mut conn).await?;
Ok(())
}
@@ -171,7 +161,7 @@ async fn post_delete_emergency_access(emer_id: String, headers: Headers, conn: D
// region invite
#[derive(Deserialize, Debug)]
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct EmergencyAccessInviteData {
Email: String,
@@ -180,7 +170,7 @@ struct EmergencyAccessInviteData {
}
#[post("/emergency-access/invite", data = "<data>")]
async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessInviteData = data.into_inner().data;
@@ -201,10 +191,10 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
err!("You can not set yourself as an emergency contact.")
}
let grantee_user = match User::find_by_mail(&email, &conn).await {
let grantee_user = match User::find_by_mail(&email, &mut conn).await {
None => {
if !CONFIG.invitations_allowed() {
err!(format!("Grantee user does not exist: {}", email))
err!(format!("Grantee user does not exist: {}", &email))
}
if !CONFIG.is_email_domain_allowed(&email) {
@@ -212,12 +202,12 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
}
if !CONFIG.mail_enabled() {
let invitation = Invitation::new(email.clone());
invitation.save(&conn).await?;
let invitation = Invitation::new(&email);
invitation.save(&mut conn).await?;
}
let mut user = User::new(email.clone());
user.save(&conn).await?;
user.save(&mut conn).await?;
user
}
Some(user) => user,
@@ -227,41 +217,34 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
&grantor_user.uuid,
&grantee_user.uuid,
&grantee_user.email,
&conn,
&mut conn,
)
.await
.is_some()
{
err!(format!("Grantee user already invited: {}", email))
err!(format!("Grantee user already invited: {}", &grantee_user.email))
}
let mut new_emergency_access = EmergencyAccess::new(
grantor_user.uuid.clone(),
Some(grantee_user.email.clone()),
emergency_access_status,
new_type,
wait_time_days,
);
new_emergency_access.save(&conn).await?;
let mut new_emergency_access =
EmergencyAccess::new(grantor_user.uuid, grantee_user.email, emergency_access_status, new_type, wait_time_days);
new_emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite(
&grantee_user.email,
&new_emergency_access.email.expect("Grantee email does not exists"),
&grantee_user.uuid,
Some(new_emergency_access.uuid),
Some(grantor_user.name.clone()),
Some(grantor_user.email),
&new_emergency_access.uuid,
&grantor_user.name,
&grantor_user.email,
)
.await?;
} else {
// Automatically mark the user as accepted if email invites are disabled
match User::find_by_mail(&email, &conn).await {
Some(user) => {
match accept_invite_process(user.uuid, new_emergency_access.uuid, Some(email), conn.borrow()).await {
Ok(v) => (v),
Err(e) => err!(e.to_string()),
}
}
match User::find_by_mail(&email, &mut conn).await {
Some(user) => match accept_invite_process(user.uuid, &mut new_emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
},
None => err!("Grantee user not found."),
}
}
@@ -270,10 +253,10 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
}
#[post("/emergency-access/<emer_id>/reinvite")]
async fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
async fn resend_invite(emer_id: String, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -291,7 +274,7 @@ async fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> Empty
None => err!("Email not valid."),
};
let grantee_user = match User::find_by_mail(&email, &conn).await {
let grantee_user = match User::find_by_mail(&email, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
@@ -302,22 +285,20 @@ async fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> Empty
mail::send_emergency_access_invite(
&email,
&grantor_user.uuid,
Some(emergency_access.uuid),
Some(grantor_user.name.clone()),
Some(grantor_user.email),
&emergency_access.uuid,
&grantor_user.name,
&grantor_user.email,
)
.await?;
} else {
if Invitation::find_by_mail(&email, &conn).await.is_none() {
let invitation = Invitation::new(email);
invitation.save(&conn).await?;
if Invitation::find_by_mail(&email, &mut conn).await.is_none() {
let invitation = Invitation::new(&email);
invitation.save(&mut conn).await?;
}
// Automatically mark the user as accepted if email invites are disabled
match accept_invite_process(grantee_user.uuid, emergency_access.uuid, emergency_access.email, conn.borrow())
.await
{
Ok(v) => (v),
match accept_invite_process(grantee_user.uuid, &mut emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
}
@@ -332,38 +313,49 @@ struct AcceptData {
}
#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
async fn accept_invite(emer_id: String, data: JsonUpcase<AcceptData>, conn: DbConn) -> EmptyResult {
async fn accept_invite(
emer_id: String,
data: JsonUpcase<AcceptData>,
headers: Headers,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_allowed()?;
let data: AcceptData = data.into_inner().data;
let token = &data.Token;
let claims = decode_emergency_access_invite(token)?;
let grantee_user = match User::find_by_mail(&claims.email, &conn).await {
// This can happen if the user who received the invite used a different email to sign up.
// Since we do not know if this is intended, we error out here and do nothing with the invite.
if claims.email != headers.user.email {
err!("Claim email does not match current users email")
}
let grantee_user = match User::find_by_mail(&claims.email, &mut conn).await {
Some(user) => {
Invitation::take(&claims.email, &conn).await;
Invitation::take(&claims.email, &mut conn).await;
user
}
None => err!("Invited user not found"),
};
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
// get grantor user to send Accepted email
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if (claims.emer_id.is_some() && emer_id == claims.emer_id.unwrap())
&& (claims.grantor_name.is_some() && grantor_user.name == claims.grantor_name.unwrap())
&& (claims.grantor_email.is_some() && grantor_user.email == claims.grantor_email.unwrap())
if emer_id == claims.emer_id
&& grantor_user.name == claims.grantor_name
&& grantor_user.email == claims.grantor_email
{
match accept_invite_process(grantee_user.uuid.clone(), emer_id, Some(grantee_user.email.clone()), &conn).await {
Ok(v) => (v),
match accept_invite_process(grantee_user.uuid, &mut emergency_access, &grantee_user.email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
@@ -379,17 +371,11 @@ async fn accept_invite(emer_id: String, data: JsonUpcase<AcceptData>, conn: DbCo
async fn accept_invite_process(
grantee_uuid: String,
emer_id: String,
email: Option<String>,
conn: &DbConn,
emergency_access: &mut EmergencyAccess,
grantee_email: &str,
conn: &mut DbConn,
) -> EmptyResult {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let emer_email = emergency_access.email;
if emer_email.is_none() || emer_email != email {
if emergency_access.email.is_none() || emergency_access.email.as_ref().unwrap() != grantee_email {
err!("User email does not match invite.");
}
@@ -414,7 +400,7 @@ async fn confirm_emergency_access(
emer_id: String,
data: JsonUpcase<ConfirmData>,
headers: Headers,
conn: DbConn,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
@@ -422,7 +408,7 @@ async fn confirm_emergency_access(
let data: ConfirmData = data.into_inner().data;
let key = data.Key;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -433,13 +419,13 @@ async fn confirm_emergency_access(
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &conn).await {
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn).await {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
@@ -448,7 +434,7 @@ async fn confirm_emergency_access(
emergency_access.key_encrypted = Some(key);
emergency_access.email = None;
emergency_access.save(&conn).await?;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_confirmed(&grantee_user.email, &grantor_user.name).await?;
@@ -464,22 +450,22 @@ async fn confirm_emergency_access(
// region access emergency access
#[post("/emergency-access/<emer_id>/initiate")]
async fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn initiate_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::Confirmed as i32
|| emergency_access.grantee_uuid != Some(initiating_user.uuid.clone())
|| emergency_access.grantee_uuid != Some(initiating_user.uuid)
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
@@ -489,14 +475,14 @@ async fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbCo
emergency_access.updated_at = now;
emergency_access.recovery_initiated_at = Some(now);
emergency_access.last_notification_at = Some(now);
emergency_access.save(&conn).await?;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_initiated(
&grantor_user.email,
&initiating_user.name,
emergency_access.get_type_as_str(),
&emergency_access.wait_time_days.clone().to_string(),
&emergency_access.wait_time_days,
)
.await?;
}
@@ -504,34 +490,33 @@ async fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbCo
}
#[post("/emergency-access/<emer_id>/approve")]
async fn approve_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn approve_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let approving_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
|| emergency_access.grantor_uuid != approving_user.uuid
|| emergency_access.grantor_uuid != headers.user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&approving_user.uuid, &conn).await {
let grantor_user = match User::find_by_uuid(&headers.user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn).await {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::RecoveryApproved as i32;
emergency_access.save(&conn).await?;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name).await?;
@@ -543,35 +528,34 @@ async fn approve_emergency_access(emer_id: String, headers: Headers, conn: DbCon
}
#[post("/emergency-access/<emer_id>/reject")]
async fn reject_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn reject_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let rejecting_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if (emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
&& emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32)
|| emergency_access.grantor_uuid != rejecting_user.uuid
|| emergency_access.grantor_uuid != headers.user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&rejecting_user.uuid, &conn).await {
let grantor_user = match User::find_by_uuid(&headers.user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn).await {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
emergency_access.save(&conn).await?;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_rejected(&grantee_user.email, &grantor_user.name).await?;
@@ -587,31 +571,27 @@ async fn reject_emergency_access(emer_id: String, headers: Headers, conn: DbConn
// region action
#[post("/emergency-access/<emer_id>/view")]
async fn view_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn view_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let host = headers.host;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::View) {
if !is_valid_request(&emergency_access, headers.user.uuid, EmergencyAccessType::View) {
err!("Emergency access not valid.")
}
let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &conn).await;
let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &mut conn).await;
let cipher_sync_data =
CipherSyncData::new(&emergency_access.grantor_uuid, &ciphers, CipherSyncType::User, &conn).await;
CipherSyncData::new(&emergency_access.grantor_uuid, &ciphers, CipherSyncType::User, &mut conn).await;
let ciphers_json = stream::iter(ciphers)
.then(|c| async {
let c = c; // Move out this single variable
c.to_json(&host, &emergency_access.grantor_uuid, Some(&cipher_sync_data), &conn).await
})
.collect::<Vec<Value>>()
.await;
let mut ciphers_json = Vec::new();
for c in ciphers {
ciphers_json
.push(c.to_json(&headers.host, &emergency_access.grantor_uuid, Some(&cipher_sync_data), &mut conn).await);
}
Ok(Json(json!({
"Ciphers": ciphers_json,
@@ -621,11 +601,11 @@ async fn view_emergency_access(emer_id: String, headers: Headers, conn: DbConn)
}
#[post("/emergency-access/<emer_id>/takeover")]
async fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn takeover_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -634,7 +614,7 @@ async fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbCo
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
@@ -647,7 +627,7 @@ async fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbCo
})))
}
#[derive(Deserialize, Debug)]
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct EmergencyAccessPasswordData {
NewMasterPasswordHash: String,
@@ -659,7 +639,7 @@ async fn password_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessPasswordData>,
headers: Headers,
conn: DbConn,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_allowed()?;
@@ -668,7 +648,7 @@ async fn password_emergency_access(
let key = data.Key;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -677,7 +657,7 @@ async fn password_emergency_access(
err!("Emergency access not valid.")
}
let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
@@ -685,15 +665,15 @@ async fn password_emergency_access(
// change grantor_user password
grantor_user.set_password(new_master_password_hash, None);
grantor_user.akey = key;
grantor_user.save(&conn).await?;
grantor_user.save(&mut conn).await?;
// Disable TwoFactor providers since they will otherwise block logins
TwoFactor::delete_all_by_user(&grantor_user.uuid, &conn).await?;
TwoFactor::delete_all_by_user(&grantor_user.uuid, &mut conn).await?;
// Remove grantor from all organisations unless Owner
for user_org in UserOrganization::find_any_state_by_user(&grantor_user.uuid, &conn).await {
for user_org in UserOrganization::find_any_state_by_user(&grantor_user.uuid, &mut conn).await {
if user_org.atype != UserOrgType::Owner as i32 {
user_org.delete(&conn).await?;
user_org.delete(&mut conn).await?;
}
}
Ok(())
@@ -702,9 +682,9 @@ async fn password_emergency_access(
// endregion
#[get("/emergency-access/<emer_id>/policies")]
async fn policies_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn policies_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -713,12 +693,12 @@ async fn policies_emergency_access(emer_id: String, headers: Headers, conn: DbCo
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &conn);
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &mut conn);
let policies_json: Vec<Value> = policies.await.iter().map(OrgPolicy::to_json).collect();
Ok(Json(json!({
@@ -751,41 +731,45 @@ pub async fn emergency_request_timeout_job(pool: DbPool) {
return;
}
if let Ok(conn) = pool.get().await {
let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn).await;
if let Ok(mut conn) = pool.get().await {
let emergency_access_list = EmergencyAccess::find_all_recoveries_initiated(&mut conn).await;
if emergency_access_list.is_empty() {
debug!("No emergency request timeout to approve");
}
let now = Utc::now().naive_utc();
for mut emer in emergency_access_list {
if emer.recovery_initiated_at.is_some()
&& Utc::now().naive_utc()
>= emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days))
{
emer.status = EmergencyAccessStatus::RecoveryApproved as i32;
emer.save(&conn).await.expect("Cannot save emergency access on job");
// The find_all_recoveries_initiated already checks if the recovery_initiated_at is not null (None)
let recovery_allowed_at =
emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days));
if recovery_allowed_at.le(&now) {
// Only update the access status
// Updating the whole record could cause issues when the emergency_notification_reminder_job is also active
emer.update_access_status_and_save(EmergencyAccessStatus::RecoveryApproved as i32, &now, &mut conn)
.await
.expect("Unable to update emergency access status");
if CONFIG.mail_enabled() {
// get grantor user to send Accepted email
let grantor_user =
User::find_by_uuid(&emer.grantor_uuid, &conn).await.expect("Grantor user not found.");
User::find_by_uuid(&emer.grantor_uuid, &mut conn).await.expect("Grantor user not found");
// get grantee user to send Accepted email
let grantee_user =
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid"), &mut conn)
.await
.expect("Grantee user not found.");
.expect("Grantee user not found");
mail::send_emergency_access_recovery_timed_out(
&grantor_user.email,
&grantee_user.name.clone(),
&grantee_user.name,
emer.get_type_as_str(),
)
.await
.expect("Error on sending email");
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name.clone())
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name)
.await
.expect("Error on sending email");
}
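The timeout check above reduces to a single date comparison; a self-contained chrono sketch with placeholder arguments, not the project's code:

use chrono::{Duration, NaiveDateTime, Utc};

fn recovery_is_allowed(recovery_initiated_at: NaiveDateTime, wait_time_days: i32) -> bool {
    let now = Utc::now().naive_utc();
    // Recovery becomes available once the configured waiting period has elapsed.
    let recovery_allowed_at = recovery_initiated_at + Duration::days(i64::from(wait_time_days));
    recovery_allowed_at <= now
}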
@@ -802,39 +786,48 @@ pub async fn emergency_notification_reminder_job(pool: DbPool) {
return;
}
if let Ok(conn) = pool.get().await {
let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn).await;
if let Ok(mut conn) = pool.get().await {
let emergency_access_list = EmergencyAccess::find_all_recoveries_initiated(&mut conn).await;
if emergency_access_list.is_empty() {
debug!("No emergency request reminder notification to send");
}
let now = Utc::now().naive_utc();
for mut emer in emergency_access_list {
if (emer.recovery_initiated_at.is_some()
&& Utc::now().naive_utc()
>= emer.recovery_initiated_at.unwrap() + Duration::days((i64::from(emer.wait_time_days)) - 1))
&& (emer.last_notification_at.is_none()
|| (emer.last_notification_at.is_some()
&& Utc::now().naive_utc() >= emer.last_notification_at.unwrap() + Duration::days(1)))
{
emer.save(&conn).await.expect("Cannot save emergency access on job");
// find_all_recoveries_initiated already filters out entries where recovery_initiated_at is null (None)
// Calculate the day before the recovery will become active
let final_recovery_reminder_at =
emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days - 1));
// The next reminder is due one day after the previous notification, or immediately if none has been sent yet
let next_recovery_reminder_at = if let Some(last_notification_at) = emer.last_notification_at {
last_notification_at + Duration::days(1)
} else {
now
};
if final_recovery_reminder_at.le(&now) && next_recovery_reminder_at.le(&now) {
// Only update the last notification date
// Updating the whole record could cause issues when the emergency_request_timeout_job is also active
emer.update_last_notification_date_and_save(&now, &mut conn)
.await
.expect("Unable to update emergency access notification date");
if CONFIG.mail_enabled() {
// get grantor user to send the reminder email to
let grantor_user =
User::find_by_uuid(&emer.grantor_uuid, &conn).await.expect("Grantor user not found.");
User::find_by_uuid(&emer.grantor_uuid, &mut conn).await.expect("Grantor user not found");
// get grantee user whose name is included in the reminder email
let grantee_user =
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid"), &mut conn)
.await
.expect("Grantee user not found.");
.expect("Grantee user not found");
mail::send_emergency_access_recovery_reminder(
&grantor_user.email,
&grantee_user.name.clone(),
&grantee_user.name,
emer.get_type_as_str(),
&emer.wait_time_days.to_string(), // TODO(jjlin): This should be the number of days left.
"1", // This notification is only triggered one day before the activation
)
.await
.expect("Error on sending email");

src/api/core/events.rs (new file, 341 lines)

@@ -0,0 +1,341 @@
use std::net::IpAddr;
use chrono::NaiveDateTime;
use rocket::{form::FromForm, serde::json::Json, Route};
use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcaseVec},
auth::{AdminHeaders, ClientIp, Headers},
db::{
models::{Cipher, Event, UserOrganization},
DbConn, DbPool,
},
util::parse_date,
CONFIG,
};
/// ###############################################################################################################
/// /api routes
pub fn routes() -> Vec<Route> {
routes![get_org_events, get_cipher_events, get_user_events,]
}
#[derive(FromForm)]
#[allow(non_snake_case)]
struct EventRange {
start: String,
end: String,
#[field(name = "continuationToken")]
continuation_token: Option<String>,
}
// Upstream: https://github.com/bitwarden/server/blob/9ecf69d9cabce732cf2c57976dd9afa5728578fb/src/Api/Controllers/EventsController.cs#LL84C35-L84C41
#[get("/organizations/<org_id>/events?<data..>")]
async fn get_org_events(org_id: String, data: EventRange, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0)
} else {
let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date)
} else {
parse_date(&data.end)
};
Event::find_by_organization_uuid(&org_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
.collect()
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
})))
}
#[get("/ciphers/<cipher_id>/events?<data..>")]
async fn get_cipher_events(cipher_id: String, data: EventRange, headers: Headers, mut conn: DbConn) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0)
} else {
let mut events_json = Vec::with_capacity(0);
if UserOrganization::user_has_ge_admin_access_to_cipher(&headers.user.uuid, &cipher_id, &mut conn).await {
let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date)
} else {
parse_date(&data.end)
};
events_json = Event::find_by_cipher_uuid(&cipher_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
.collect()
}
events_json
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
})))
}
#[get("/organizations/<org_id>/users/<user_org_id>/events?<data..>")]
async fn get_user_events(
org_id: String,
user_org_id: String,
data: EventRange,
_headers: AdminHeaders,
mut conn: DbConn,
) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0)
} else {
let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date)
} else {
parse_date(&data.end)
};
Event::find_by_org_and_user_org(&org_id, &user_org_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
.collect()
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
})))
}
fn get_continuation_token(events_json: &Vec<Value>) -> Option<&str> {
// When the length of the vec equals the max page size, there is probably more data.
// When it is less, all events have been loaded.
if events_json.len() as i64 == Event::PAGE_SIZE {
if let Some(last_event) = events_json.last() {
last_event["date"].as_str()
} else {
None
}
} else {
None
}
}
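A minimal client-side sketch (not part of the diff) of how the ContinuationToken above is meant to be consumed: keep requesting pages and feed the returned token back as continuationToken until none is returned. It assumes a reqwest dependency with the blocking and json features; the base URL, organization id, bearer token and date range are placeholders.

use serde_json::Value;

fn fetch_all_org_events(base: &str, org_id: &str, token: &str) -> Result<Vec<Value>, reqwest::Error> {
    let client = reqwest::blocking::Client::new();
    let mut all_events = Vec::new();
    let mut continuation: Option<String> = None;
    loop {
        let mut req = client
            .get(format!("{base}/api/organizations/{org_id}/events"))
            .bearer_auth(token)
            .query(&[("start", "2022-01-01T00:00:00.000Z"), ("end", "2022-12-31T23:59:59.999Z")]);
        if let Some(ct) = &continuation {
            // The server treats the token (the date of the last event returned) as the new end date.
            req = req.query(&[("continuationToken", ct.as_str())]);
        }
        let page: Value = req.send()?.json()?;
        if let Some(items) = page["Data"].as_array() {
            all_events.extend(items.iter().cloned());
        }
        match page["ContinuationToken"].as_str() {
            Some(ct) => continuation = Some(ct.to_string()),
            None => break, // fewer than PAGE_SIZE events: last page reached
        }
    }
    Ok(all_events)
}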
/// ###############################################################################################################
/// /events routes
pub fn main_routes() -> Vec<Route> {
routes![post_events_collect,]
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EventCollection {
// Mandatory
Type: i32,
Date: String,
// Optional
CipherId: Option<String>,
OrganizationId: Option<String>,
}
// Upstream:
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Events/Controllers/CollectController.cs
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Services/Implementations/EventService.cs
#[post("/collect", format = "application/json", data = "<data>")]
async fn post_events_collect(
data: JsonUpcaseVec<EventCollection>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> EmptyResult {
if !CONFIG.org_events_enabled() {
return Ok(());
}
for event in data.iter().map(|d| &d.data) {
let event_date = parse_date(&event.Date);
match event.Type {
1000..=1099 => {
_log_user_event(
event.Type,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&ip.ip,
&mut conn,
)
.await;
}
1600..=1699 => {
if let Some(org_uuid) = &event.OrganizationId {
_log_event(
event.Type,
org_uuid,
String::from(org_uuid),
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&ip.ip,
&mut conn,
)
.await;
}
}
_ => {
if let Some(cipher_uuid) = &event.CipherId {
if let Some(cipher) = Cipher::find_by_uuid(cipher_uuid, &mut conn).await {
if let Some(org_uuid) = cipher.organization_uuid {
_log_event(
event.Type,
cipher_uuid,
org_uuid,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&ip.ip,
&mut conn,
)
.await;
}
}
}
}
}
}
Ok(())
}
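For reference, a sketch (not part of the diff) of the body a client would POST to /events/collect, as implied by the EventCollection struct above: a JSON array of upcased objects. The ids and the event type are illustrative only; 1107 merely falls in the 1100..=1199 cipher range handled above.

use serde_json::json;

fn main() {
    let payload = json!([
        {
            "Type": 1107,                                       // a cipher-range event type (illustrative)
            "Date": "2022-12-18T20:00:00.000Z",
            "CipherId": "00000000-0000-0000-0000-000000000000", // illustrative uuid
            "OrganizationId": null
        }
    ]);
    println!("{payload}");
}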
pub async fn log_user_event(event_type: i32, user_uuid: &str, device_type: i32, ip: &IpAddr, conn: &mut DbConn) {
if !CONFIG.org_events_enabled() {
return;
}
_log_user_event(event_type, user_uuid, device_type, None, ip, conn).await;
}
async fn _log_user_event(
event_type: i32,
user_uuid: &str,
device_type: i32,
event_date: Option<NaiveDateTime>,
ip: &IpAddr,
conn: &mut DbConn,
) {
let orgs = UserOrganization::get_org_uuid_by_user(user_uuid, conn).await;
let mut events: Vec<Event> = Vec::with_capacity(orgs.len() + 1); // We need an event per org and one without an org
// Upstream also saves the event without any org_uuid.
let mut event = Event::new(event_type, event_date);
event.user_uuid = Some(String::from(user_uuid));
event.act_user_uuid = Some(String::from(user_uuid));
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
events.push(event);
// For each org the user is a member of, also store the event per org
for org_uuid in orgs {
let mut event = Event::new(event_type, event_date);
event.user_uuid = Some(String::from(user_uuid));
event.org_uuid = Some(org_uuid);
event.act_user_uuid = Some(String::from(user_uuid));
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
events.push(event);
}
Event::save_user_event(events, conn).await.unwrap_or(());
}
pub async fn log_event(
event_type: i32,
source_uuid: &str,
org_uuid: String,
act_user_uuid: String,
device_type: i32,
ip: &IpAddr,
conn: &mut DbConn,
) {
if !CONFIG.org_events_enabled() {
return;
}
_log_event(event_type, source_uuid, org_uuid, &act_user_uuid, device_type, None, ip, conn).await;
}
#[allow(clippy::too_many_arguments)]
async fn _log_event(
event_type: i32,
source_uuid: &str,
org_uuid: String,
act_user_uuid: &str,
device_type: i32,
event_date: Option<NaiveDateTime>,
ip: &IpAddr,
conn: &mut DbConn,
) {
// Create a new empty event
let mut event = Event::new(event_type, event_date);
match event_type {
// 1000..=1099 Are user events, they need to be logged via log_user_event()
// Cipher Events
1100..=1199 => {
event.cipher_uuid = Some(String::from(source_uuid));
}
// Collection Events
1300..=1399 => {
event.collection_uuid = Some(String::from(source_uuid));
}
// Group Events
1400..=1499 => {
event.group_uuid = Some(String::from(source_uuid));
}
// Org User Events
1500..=1599 => {
event.org_user_uuid = Some(String::from(source_uuid));
}
// 1600..=1699 Are organizational events, and they do not need the source_uuid
// Policy Events
1700..=1799 => {
event.policy_uuid = Some(String::from(source_uuid));
}
// Ignore others
_ => {}
}
event.org_uuid = Some(org_uuid);
event.act_user_uuid = Some(String::from(act_user_uuid));
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
event.save(conn).await.unwrap_or(());
}
pub async fn event_cleanup_job(pool: DbPool) {
debug!("Start events cleanup job");
if CONFIG.events_days_retain().is_none() {
debug!("events_days_retain is not configured, abort");
return;
}
if let Ok(mut conn) = pool.get().await {
Event::clean_events(&mut conn).await.ok();
} else {
error!("Failed to get DB connection while trying to cleanup the events table")
}
}


@@ -12,8 +12,8 @@ pub fn routes() -> Vec<rocket::Route> {
}
#[get("/folders")]
async fn get_folders(headers: Headers, conn: DbConn) -> Json<Value> {
let folders = Folder::find_by_user(&headers.user.uuid, &conn).await;
async fn get_folders(headers: Headers, mut conn: DbConn) -> Json<Value> {
let folders = Folder::find_by_user(&headers.user.uuid, &mut conn).await;
let folders_json: Vec<Value> = folders.iter().map(Folder::to_json).collect();
Json(json!({
@@ -24,8 +24,8 @@ async fn get_folders(headers: Headers, conn: DbConn) -> Json<Value> {
}
#[get("/folders/<uuid>")]
async fn get_folder(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(&uuid, &conn).await {
async fn get_folder(uuid: String, headers: Headers, mut conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -44,12 +44,12 @@ pub struct FolderData {
}
#[post("/folders", data = "<data>")]
async fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
async fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
let data: FolderData = data.into_inner().data;
let mut folder = Folder::new(headers.user.uuid, data.Name);
folder.save(&conn).await?;
folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::FolderCreate, &folder).await;
Ok(Json(folder.to_json()))
@@ -71,12 +71,12 @@ async fn put_folder(
uuid: String,
data: JsonUpcase<FolderData>,
headers: Headers,
conn: DbConn,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data: FolderData = data.into_inner().data;
let mut folder = match Folder::find_by_uuid(&uuid, &conn).await {
let mut folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -87,7 +87,7 @@ async fn put_folder(
folder.name = data.Name;
folder.save(&conn).await?;
folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::FolderUpdate, &folder).await;
Ok(Json(folder.to_json()))
@@ -99,8 +99,8 @@ async fn delete_folder_post(uuid: String, headers: Headers, conn: DbConn, nt: No
}
#[delete("/folders/<uuid>")]
async fn delete_folder(uuid: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let folder = match Folder::find_by_uuid(&uuid, &conn).await {
async fn delete_folder(uuid: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -110,7 +110,7 @@ async fn delete_folder(uuid: String, headers: Headers, conn: DbConn, nt: Notify<
}
// Delete the actual folder entry
folder.delete(&conn).await?;
folder.delete(&mut conn).await?;
nt.send_folder_update(UpdateType::FolderDelete, &folder).await;
Ok(())


@@ -1,6 +1,7 @@
pub mod accounts;
mod ciphers;
mod emergency_access;
mod events;
mod folders;
mod organizations;
mod sends;
@@ -9,6 +10,7 @@ pub mod two_factor;
pub use ciphers::purge_trashed_ciphers;
pub use ciphers::{CipherSyncData, CipherSyncType};
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use events::{event_cleanup_job, log_event, log_user_event};
pub use sends::purge_sends;
pub use two_factor::send_incomplete_2fa_notifications;
@@ -16,12 +18,13 @@ pub fn routes() -> Vec<Route> {
let mut device_token_routes = routes![clear_device_token, put_device_token];
let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
let mut hibp_routes = routes![hibp_breach];
let mut meta_routes = routes![alive, now, version];
let mut meta_routes = routes![alive, now, version, config];
let mut routes = Vec::new();
routes.append(&mut accounts::routes());
routes.append(&mut ciphers::routes());
routes.append(&mut emergency_access::routes());
routes.append(&mut events::routes());
routes.append(&mut folders::routes());
routes.append(&mut organizations::routes());
routes.append(&mut two_factor::routes());
@@ -34,10 +37,18 @@ pub fn routes() -> Vec<Route> {
routes
}
pub fn events_routes() -> Vec<Route> {
let mut routes = Vec::new();
routes.append(&mut events::main_routes());
routes
}
//
// Move this somewhere else
//
use rocket::serde::json::Json;
use rocket::Catcher;
use rocket::Route;
use serde_json::Value;
@@ -127,7 +138,7 @@ struct EquivDomainData {
}
#[post("/settings/domains", data = "<data>")]
async fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EquivDomainData = data.into_inner().data;
let excluded_globals = data.ExcludedGlobalEquivalentDomains.unwrap_or_default();
@@ -139,7 +150,7 @@ async fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, co
user.excluded_globals = to_string(&excluded_globals).unwrap_or_else(|_| "[]".to_string());
user.equivalent_domains = to_string(&equivalent_domains).unwrap_or_else(|_| "[]".to_string());
user.save(&conn).await?;
user.save(&mut conn).await?;
Ok(Json(json!({})))
}
@@ -200,3 +211,38 @@ pub fn now() -> Json<String> {
fn version() -> Json<&'static str> {
Json(crate::VERSION.unwrap_or_default())
}
#[get("/config")]
fn config() -> Json<Value> {
let domain = crate::CONFIG.domain();
Json(json!({
"version": crate::VERSION,
"gitHash": option_env!("GIT_REV"),
"server": {
"name": "Vaultwarden",
"url": "https://github.com/dani-garcia/vaultwarden"
},
"environment": {
"vault": domain,
"api": format!("{domain}/api"),
"identity": format!("{domain}/identity"),
"notifications": format!("{domain}/notifications"),
"sso": "",
},
}))
}
pub fn catchers() -> Vec<Catcher> {
catchers![api_not_found]
}
#[catch(404)]
fn api_not_found() -> Json<Value> {
Json(json!({
"error": {
"code": 404,
"reason": "Not Found",
"description": "The requested resource could not be found."
}
}))
}

File diff suppressed because it is too large.


@@ -17,6 +17,9 @@ use crate::{
const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer available";
// The max file size allowed by Bitwarden clients, plus an extra 5% to avoid issues
const SIZE_525_MB: u64 = 550_502_400;
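For reference (not part of the diff), the 550_502_400 figure is exactly 525 MiB, which matches the comment if the client-side limit is taken to be 500 MiB plus 5% headroom (an assumption, not something stated in the source):

fn main() {
    let client_limit: u64 = 500 * 1024 * 1024;            // assumed 500 MiB client-side limit
    let with_headroom = client_limit + client_limit / 20; // + 5%
    assert_eq!(with_headroom, 550_502_400);               // == SIZE_525_MB == 525 MiB
}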
pub fn routes() -> Vec<rocket::Route> {
routes![
get_sends,
@@ -28,14 +31,16 @@ pub fn routes() -> Vec<rocket::Route> {
put_send,
delete_send,
put_remove_password,
download_send
download_send,
post_send_file_v2,
post_send_file_v2_data
]
}
pub async fn purge_sends(pool: DbPool) {
debug!("Purging sends");
if let Ok(conn) = pool.get().await {
Send::purge(&conn).await;
if let Ok(mut conn) = pool.get().await {
Send::purge(&mut conn).await;
} else {
error!("Failed to get DB connection while purging sends")
}
@@ -58,6 +63,7 @@ struct SendData {
Notes: Option<String>,
Text: Option<Value>,
File: Option<Value>,
FileLength: Option<NumberOrString>,
}
/// Enforces the `Disable Send` policy. A non-owner/admin user belonging to
@@ -68,10 +74,11 @@ struct SendData {
///
/// There is also a Vaultwarden-specific `sends_allowed` config setting that
/// controls this policy globally.
async fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyResult {
async fn enforce_disable_send_policy(headers: &Headers, conn: &mut DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let policy_type = OrgPolicyType::DisableSend;
if !CONFIG.sends_allowed() || OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn).await {
if !CONFIG.sends_allowed()
|| OrgPolicy::is_applicable_to_user(user_uuid, OrgPolicyType::DisableSend, None, conn).await
{
err!("Due to an Enterprise Policy, you are only able to delete an existing Send.")
}
Ok(())
@@ -83,7 +90,7 @@ async fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyR
/// but is allowed to remove this option from an existing Send.
///
/// Ref: https://bitwarden.com/help/article/policies/#send-options
async fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &DbConn) -> EmptyResult {
async fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &mut DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let hide_email = data.HideEmail.unwrap_or(false);
if hide_email && OrgPolicy::is_hide_email_disabled(user_uuid, conn).await {
@@ -135,8 +142,8 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
}
#[get("/sends")]
async fn get_sends(headers: Headers, conn: DbConn) -> Json<Value> {
let sends = Send::find_by_user(&headers.user.uuid, &conn);
async fn get_sends(headers: Headers, mut conn: DbConn) -> Json<Value> {
let sends = Send::find_by_user(&headers.user.uuid, &mut conn);
let sends_json: Vec<Value> = sends.await.iter().map(|s| s.to_json()).collect();
Json(json!({
@@ -147,8 +154,8 @@ async fn get_sends(headers: Headers, conn: DbConn) -> Json<Value> {
}
#[get("/sends/<uuid>")]
async fn get_send(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(&uuid, &conn).await {
async fn get_send(uuid: String, headers: Headers, mut conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(&uuid, &mut conn).await {
Some(send) => send,
None => err!("Send not found"),
};
@@ -161,19 +168,19 @@ async fn get_send(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
}
#[post("/sends", data = "<data>")]
async fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &conn).await?;
async fn post_send(data: JsonUpcase<SendData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data: SendData = data.into_inner().data;
enforce_disable_hide_email_policy(&data, &headers, &conn).await?;
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
if data.Type == SendType::File as i32 {
err!("File sends should use /api/sends/file")
}
let mut send = create_send(data, headers.user.uuid)?;
send.save(&conn).await?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&conn).await).await;
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json()))
}
@@ -184,9 +191,17 @@ struct UploadData<'f> {
data: TempFile<'f>,
}
#[derive(FromForm)]
struct UploadDataV2<'f> {
data: TempFile<'f>,
}
// @deprecated Mar 25 2021: This method has been deprecated in favor of direct uploads (v2).
// This method still exists to support older clients; it should probably be removed at some point.
// Upstream: https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L164-L167
#[post("/sends/file", format = "multipart/form-data", data = "<data>")]
async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &conn).await?;
async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let UploadData {
model,
@@ -194,15 +209,12 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, conn: DbCo
} = data.into_inner();
let model = model.into_inner().data;
enforce_disable_hide_email_policy(&model, &headers, &conn).await?;
// Get the file length and add an extra 5% to avoid issues
const SIZE_525_MB: u64 = 550_502_400;
enforce_disable_hide_email_policy(&model, &headers, &mut conn).await?;
let size_limit = match CONFIG.user_attachment_limit() {
Some(0) => err!("File uploads are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &conn).await;
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &mut conn).await;
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
@@ -216,6 +228,19 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, conn: DbCo
err!("Send content is not a file");
}
// There is a bug regarding uploading attachments/sends using the Mobile clients
// See: https://github.com/dani-garcia/vaultwarden/issues/2644 && https://github.com/bitwarden/mobile/issues/2018
// This has been fixed via a PR: https://github.com/bitwarden/mobile/pull/2031, but hasn't landed in a new release yet.
// On the vaultwarden side this is temporarily fixed by using a custom multer library
// See: https://github.com/dani-garcia/vaultwarden/pull/2675
// In any case we will match TempFile::File and not TempFile::Buffered, since Buffered will alter the contents.
if let TempFile::Buffered {
content: _,
} = &data
{
err!("Error reading send file data. Please try an other client.");
}
let size = data.len();
if size > size_limit {
err!("Attachment storage limit exceeded with this file");
@@ -239,12 +264,111 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, conn: DbCo
send.data = serde_json::to_string(&data_value)?;
// Save the changes in the database
send.save(&conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn).await).await;
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json()))
}
// Upstream: https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L190
#[post("/sends/file/v2", data = "<data>")]
async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut conn: DbConn) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data = data.into_inner().data;
if data.Type != SendType::File as i32 {
err!("Send content is not a file");
}
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let file_length = match &data.FileLength {
Some(m) => Some(m.into_i32()?),
_ => None,
};
let size_limit = match CONFIG.user_attachment_limit() {
Some(0) => err!("File uploads are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &mut conn).await;
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
std::cmp::Ord::max(left as u64, SIZE_525_MB)
}
None => SIZE_525_MB,
};
if file_length.is_some() && file_length.unwrap() as u64 > size_limit {
err!("Attachment storage limit exceeded with this file");
}
let mut send = create_send(data, headers.user.uuid)?;
let file_id = crate::crypto::generate_send_id();
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id.clone()));
o.insert(String::from("Size"), Value::Number(file_length.unwrap().into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(file_length.unwrap())));
}
send.data = serde_json::to_string(&data_value)?;
send.save(&mut conn).await?;
Ok(Json(json!({
"fileUploadType": 0, // 0 == Direct | 1 == Azure
"object": "send-fileUpload",
"url": format!("/sends/{}/file/{}", send.uuid, file_id),
"sendResponse": send.to_json()
})))
}
// https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L243
#[post("/sends/<send_uuid>/file/<file_id>", format = "multipart/form-data", data = "<data>")]
async fn post_send_file_v2_data(
send_uuid: String,
file_id: String,
data: Form<UploadDataV2<'_>>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let mut data = data.into_inner();
// There is a bug regarding uploading attachments/sends using the Mobile clients
// See: https://github.com/dani-garcia/vaultwarden/issues/2644 && https://github.com/bitwarden/mobile/issues/2018
// This has been fixed via a PR: https://github.com/bitwarden/mobile/pull/2031, but hasn't landed in a new release yet.
// On the vaultwarden side this is temporarily fixed by using a custom multer library
// See: https://github.com/dani-garcia/vaultwarden/pull/2675
// In any case we will match TempFile::File and not TempFile::Buffered, since Buffered will alter the contents.
if let TempFile::Buffered {
content: _,
} = &data.data
{
err!("Error reading attachment data. Please try an other client.");
}
if let Some(send) = Send::find_by_uuid(&send_uuid, &mut conn).await {
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send_uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
} else {
err!("Send not found. Unable to save the file.");
}
Ok(())
}
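A minimal client-side sketch (not part of the diff) of the two-step upload flow added here: first POST the full upcased SendData (including FileLength) to /api/sends/file/v2, then POST the file bytes as multipart form-data to the url returned in the response. It assumes reqwest with the blocking, json and multipart features; building the encrypted SendData itself is out of scope, so send_data is passed in ready-made.

use serde_json::Value;

fn upload_file_send(
    base: &str,
    token: &str,
    send_data: &Value, // the full upcased SendData the client normally builds (encrypted Name, Key, FileLength, ...)
    path: &str,
) -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Step 1: create the send and announce the file length.
    let created: Value = client
        .post(format!("{base}/api/sends/file/v2"))
        .bearer_auth(token)
        .json(send_data)
        .send()?
        .json()?;

    // Step 2: upload the raw bytes to the returned url (relative to the /api base);
    // fileUploadType 0 == direct upload, handled by post_send_file_v2_data above.
    let upload_url = created["url"].as_str().ok_or("missing upload url")?;
    let form = reqwest::blocking::multipart::Form::new().file("data", path)?;
    client
        .post(format!("{base}/api{upload_url}"))
        .bearer_auth(token)
        .multipart(form)
        .send()?
        .error_for_status()?;
    Ok(())
}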
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct SendAccessData {
@@ -252,8 +376,13 @@ pub struct SendAccessData {
}
#[post("/sends/access/<access_id>", data = "<data>")]
async fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn, ip: ClientIp) -> JsonResult {
let mut send = match Send::find_by_access_id(&access_id, &conn).await {
async fn post_access(
access_id: String,
data: JsonUpcase<SendAccessData>,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
let mut send = match Send::find_by_access_id(&access_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
@@ -291,9 +420,9 @@ async fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn:
send.access_count += 1;
}
send.save(&conn).await?;
send.save(&mut conn).await?;
Ok(Json(send.to_json_access(&conn).await))
Ok(Json(send.to_json_access(&mut conn).await))
}
#[post("/sends/<send_id>/access/file/<file_id>", data = "<data>")]
@@ -302,9 +431,9 @@ async fn post_access_file(
file_id: String,
data: JsonUpcase<SendAccessData>,
host: Host,
conn: DbConn,
mut conn: DbConn,
) -> JsonResult {
let mut send = match Send::find_by_uuid(&send_id, &conn).await {
let mut send = match Send::find_by_uuid(&send_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
@@ -339,7 +468,7 @@ async fn post_access_file(
send.access_count += 1;
send.save(&conn).await?;
send.save(&mut conn).await?;
let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
let token = crate::auth::encode_jwt(&token_claims);
@@ -365,15 +494,15 @@ async fn put_send(
id: String,
data: JsonUpcase<SendData>,
headers: Headers,
conn: DbConn,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
enforce_disable_send_policy(&headers, &conn).await?;
enforce_disable_send_policy(&headers, &mut conn).await?;
let data: SendData = data.into_inner().data;
enforce_disable_hide_email_policy(&data, &headers, &conn).await?;
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(&id, &conn).await {
let mut send = match Send::find_by_uuid(&id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -420,15 +549,15 @@ async fn put_send(
send.set_password(Some(&password));
}
send.save(&conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn).await).await;
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json()))
}
#[delete("/sends/<id>")]
async fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let send = match Send::find_by_uuid(&id, &conn).await {
async fn delete_send(id: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let send = match Send::find_by_uuid(&id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -437,17 +566,17 @@ async fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify<'_>)
err!("Send is not owned by user")
}
send.delete(&conn).await?;
nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&conn).await).await;
send.delete(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&mut conn).await).await;
Ok(())
}
#[put("/sends/<id>/remove-password")]
async fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &conn).await?;
async fn put_remove_password(id: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(&id, &conn).await {
let mut send = match Send::find_by_uuid(&id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -457,8 +586,8 @@ async fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Not
}
send.set_password(None);
send.save(&conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn).await).await;
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json()))
}


@@ -4,12 +4,13 @@ use rocket::Route;
use crate::{
api::{
core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
core::log_user_event, core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase,
NumberOrString, PasswordData,
},
auth::{ClientIp, Headers},
crypto,
db::{
models::{TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
};
@@ -21,7 +22,7 @@ pub fn routes() -> Vec<Route> {
}
#[post("/two-factor/get-authenticator", data = "<data>")]
async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -30,11 +31,11 @@ async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers
}
let type_ = TwoFactorType::Authenticator as i32;
let twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await;
let twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await;
let (enabled, key) = match twofactor {
Some(tf) => (true, tf.data),
_ => (false, BASE32.encode(&crypto::get_random(vec![0u8; 20]))),
_ => (false, crypto::encode_random_bytes::<20>(BASE32)),
};
Ok(Json(json!({
@@ -57,7 +58,7 @@ async fn activate_authenticator(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
ip: ClientIp,
conn: DbConn,
mut conn: DbConn,
) -> JsonResult {
let data: EnableAuthenticatorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
@@ -81,9 +82,11 @@ async fn activate_authenticator(
}
// Validate the token provided with the key, and save new twofactor
validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &ip, &conn).await?;
validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &ip, &mut conn).await?;
_generate_recover_code(&mut user, &conn).await;
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
Ok(Json(json!({
"Enabled": true,
@@ -107,7 +110,7 @@ pub async fn validate_totp_code_str(
totp_code: &str,
secret: &str,
ip: &ClientIp,
conn: &DbConn,
conn: &mut DbConn,
) -> EmptyResult {
if !totp_code.chars().all(char::is_numeric) {
err!("TOTP code is not a number");
@@ -121,7 +124,7 @@ pub async fn validate_totp_code(
totp_code: &str,
secret: &str,
ip: &ClientIp,
conn: &DbConn,
conn: &mut DbConn,
) -> EmptyResult {
use totp_lite::{totp_custom, Sha1};
@@ -167,10 +170,20 @@ pub async fn validate_totp_code(
return Ok(());
} else if generated == totp_code && time_step <= i64::from(twofactor.last_used) {
warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
err!(
format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip),
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
);
}
}
// Else no valid code was received, deny access
err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
err!(
format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip),
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
);
}
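A small sketch (not part of the diff) of the window and replay logic used above, with the TOTP computation itself abstracted behind a closure (totp_lite does the real work in the source): the current Unix time is bucketed into 30-second steps, codes for the surrounding steps are compared, and a match at or before the stored last_used step is rejected as a replay.

fn check_code<F>(totp: F, code: &str, now_unix: i64, last_used: i64, steps: i64) -> Option<i64>
where
    F: Fn(i64) -> String,
{
    let current_step = now_unix / 30; // 30-second TOTP time steps
    for step in (current_step - steps)..=(current_step + steps) {
        if totp(step) == code {
            // A match at or before the stored last_used step is a replay and is rejected.
            return if step > last_used { Some(step) } else { None };
        }
    }
    None
}

fn main() {
    // Fake generator purely for demonstration: the "code" is just the step number as text.
    let fake = |step: i64| step.to_string();
    let now = 1_671_390_000;
    let step_now = now / 30;
    assert_eq!(check_code(&fake, &step_now.to_string(), now, step_now - 2, 1), Some(step_now));
    // Reusing the same code once it has been recorded as last_used fails.
    assert_eq!(check_code(&fake, &step_now.to_string(), now, step_now, 1), None);
}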


@@ -4,11 +4,14 @@ use rocket::serde::json::Json;
use rocket::Route;
use crate::{
api::{core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase, PasswordData},
auth::Headers,
api::{
core::log_user_event, core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase,
PasswordData,
},
auth::{ClientIp, Headers},
crypto,
db::{
models::{TwoFactor, TwoFactorType, User},
models::{EventType, TwoFactor, TwoFactorType, User},
DbConn,
},
error::MapResult,
@@ -89,14 +92,14 @@ impl DuoStatus {
const DISABLED_MESSAGE_DEFAULT: &str = "<To use the global Duo keys, please leave these fields untouched>";
#[post("/two-factor/get-duo", data = "<data>")]
async fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let data = get_user_duo_data(&headers.user.uuid, &conn).await;
let data = get_user_duo_data(&headers.user.uuid, &mut conn).await;
let (enabled, data) = match data {
DuoStatus::Global(_) => (true, Some(DuoData::secret())),
@@ -152,7 +155,7 @@ fn check_duo_fields_custom(data: &EnableDuoData) -> bool {
}
#[post("/two-factor/duo", data = "<data>")]
async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut conn: DbConn, ip: ClientIp) -> JsonResult {
let data: EnableDuoData = data.into_inner().data;
let mut user = headers.user;
@@ -171,9 +174,11 @@ async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: D
let type_ = TwoFactorType::Duo;
let twofactor = TwoFactor::new(user.uuid.clone(), type_, data_str);
twofactor.save(&conn).await?;
twofactor.save(&mut conn).await?;
_generate_recover_code(&mut user, &conn).await;
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
Ok(Json(json!({
"Enabled": true,
@@ -185,8 +190,8 @@ async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: D
}
#[put("/two-factor/duo", data = "<data>")]
async fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_duo(data, headers, conn).await
async fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn, ip: ClientIp) -> JsonResult {
activate_duo(data, headers, conn, ip).await
}
async fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult {
@@ -223,7 +228,7 @@ const AUTH_PREFIX: &str = "AUTH";
const DUO_PREFIX: &str = "TX";
const APP_PREFIX: &str = "APP";
async fn get_user_duo_data(uuid: &str, conn: &DbConn) -> DuoStatus {
async fn get_user_duo_data(uuid: &str, conn: &mut DbConn) -> DuoStatus {
let type_ = TwoFactorType::Duo as i32;
// If the user doesn't have an entry, disabled
@@ -247,7 +252,7 @@ async fn get_user_duo_data(uuid: &str, conn: &DbConn) -> DuoStatus {
}
// let (ik, sk, ak, host) = get_duo_keys();
async fn get_duo_keys_email(email: &str, conn: &DbConn) -> ApiResult<(String, String, String, String)> {
async fn get_duo_keys_email(email: &str, conn: &mut DbConn) -> ApiResult<(String, String, String, String)> {
let data = match User::find_by_mail(email, conn).await {
Some(u) => get_user_duo_data(&u.uuid, conn).await.data(),
_ => DuoData::global(),
@@ -257,7 +262,7 @@ async fn get_duo_keys_email(email: &str, conn: &DbConn) -> ApiResult<(String, St
Ok((data.ik, data.sk, CONFIG.get_duo_akey(), data.host))
}
pub async fn generate_duo_signature(email: &str, conn: &DbConn) -> ApiResult<(String, String)> {
pub async fn generate_duo_signature(email: &str, conn: &mut DbConn) -> ApiResult<(String, String)> {
let now = Utc::now().timestamp();
let (ik, sk, ak, host) = get_duo_keys_email(email, conn).await?;
@@ -275,14 +280,19 @@ fn sign_duo_values(key: &str, email: &str, ikey: &str, prefix: &str, expire: i64
format!("{}|{}", cookie, crypto::hmac_sign(key, &cookie))
}
pub async fn validate_duo_login(email: &str, response: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_duo_login(email: &str, response: &str, conn: &mut DbConn) -> EmptyResult {
// email is as entered by the user, so it needs to be normalized before
// comparison with auth_user below.
let email = &email.to_lowercase();
let split: Vec<&str> = response.split(':').collect();
if split.len() != 2 {
err!("Invalid response length");
err!(
"Invalid response length",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
);
}
let auth_sig = split[0];
@@ -296,7 +306,12 @@ pub async fn validate_duo_login(email: &str, response: &str, conn: &DbConn) -> E
let app_user = parse_duo_values(&ak, app_sig, &ik, APP_PREFIX, now)?;
if !crypto::ct_eq(&auth_user, app_user) || !crypto::ct_eq(&auth_user, email) {
err!("Error validating duo authentication")
err!(
"Error validating duo authentication",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
Ok(())


@@ -3,11 +3,14 @@ use rocket::serde::json::Json;
use rocket::Route;
use crate::{
api::{core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, PasswordData},
auth::Headers,
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordData,
},
auth::{ClientIp, Headers},
crypto,
db::{
models::{TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
error::{Error, MapResult},
@@ -28,13 +31,13 @@ struct SendEmailLoginData {
/// User is trying to login and wants to use email 2FA.
/// Does not require Bearer token
#[post("/two-factor/send-email-login", data = "<data>")] // JsonResult
async fn send_email_login(data: JsonUpcase<SendEmailLoginData>, conn: DbConn) -> EmptyResult {
async fn send_email_login(data: JsonUpcase<SendEmailLoginData>, mut conn: DbConn) -> EmptyResult {
let data: SendEmailLoginData = data.into_inner().data;
use crate::db::models::User;
// Get the user
let user = match User::find_by_mail(&data.Email, &conn).await {
let user = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
};
@@ -48,13 +51,13 @@ async fn send_email_login(data: JsonUpcase<SendEmailLoginData>, conn: DbConn) ->
err!("Email 2FA is disabled")
}
send_token(&user.uuid, &conn).await?;
send_token(&user.uuid, &mut conn).await?;
Ok(())
}
/// Generate the token, save the data for later verification and send email to user
pub async fn send_token(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn send_token(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
let type_ = TwoFactorType::Email as i32;
let mut twofactor =
TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await.map_res("Two factor not found")?;
@@ -73,7 +76,7 @@ pub async fn send_token(user_uuid: &str, conn: &DbConn) -> EmptyResult {
/// When user clicks on Manage email 2FA show the user the related information
#[post("/two-factor/get-email", data = "<data>")]
async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -82,7 +85,7 @@ async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbCon
}
let (enabled, mfa_email) =
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &conn).await {
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &mut conn).await {
Some(x) => {
let twofactor_data = EmailTokenData::from_json(&x.data)?;
(true, json!(twofactor_data.email))
@@ -107,7 +110,7 @@ struct SendEmailData {
/// Send a verification email to the specified email address to check whether it exists/belongs to user.
#[post("/two-factor/send-email", data = "<data>")]
async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: SendEmailData = data.into_inner().data;
let user = headers.user;
@@ -121,8 +124,8 @@ async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbC
let type_ = TwoFactorType::Email as i32;
if let Some(tf) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await {
tf.delete(&conn).await?;
if let Some(tf) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
tf.delete(&mut conn).await?;
}
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
@@ -130,7 +133,7 @@ async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbC
// Uses EmailVerificationChallenge as type to show that it's not verified yet.
let twofactor = TwoFactor::new(user.uuid, TwoFactorType::EmailVerificationChallenge, twofactor_data.to_json());
twofactor.save(&conn).await?;
twofactor.save(&mut conn).await?;
mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?).await?;
@@ -147,7 +150,7 @@ struct EmailData {
/// Verify email belongs to user and can be used for 2FA email codes.
#[put("/two-factor/email", data = "<data>")]
async fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn, ip: ClientIp) -> JsonResult {
let data: EmailData = data.into_inner().data;
let mut user = headers.user;
@@ -157,7 +160,7 @@ async fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> J
let type_ = TwoFactorType::EmailVerificationChallenge as i32;
let mut twofactor =
TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await.map_res("Two factor not found")?;
TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await.map_res("Two factor not found")?;
let mut email_data = EmailTokenData::from_json(&twofactor.data)?;
@@ -173,9 +176,11 @@ async fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> J
email_data.reset_token();
twofactor.atype = TwoFactorType::Email as i32;
twofactor.data = email_data.to_json();
twofactor.save(&conn).await?;
twofactor.save(&mut conn).await?;
_generate_recover_code(&mut user, &conn).await;
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
Ok(Json(json!({
"Email": email_data.email,
@@ -185,14 +190,19 @@ async fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> J
}
/// Validate the email code when used as TwoFactor token mechanism
pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &mut DbConn) -> EmptyResult {
let mut email_data = EmailTokenData::from_json(data)?;
let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Email as i32, conn)
.await
.map_res("Two factor not found")?;
let issued_token = match &email_data.last_token {
Some(t) => t,
_ => err!("No token available"),
_ => err!(
"No token available",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
),
};
if !crypto::ct_eq(issued_token, token) {
@@ -203,21 +213,32 @@ pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, c
twofactor.data = email_data.to_json();
twofactor.save(conn).await?;
err!("Token is invalid")
err!(
"Token is invalid",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
email_data.reset_token();
twofactor.data = email_data.to_json();
twofactor.save(conn).await?;
let date = NaiveDateTime::from_timestamp(email_data.token_sent, 0);
let date = NaiveDateTime::from_timestamp_opt(email_data.token_sent, 0).expect("Email token timestamp invalid.");
let max_time = CONFIG.email_expiration_time() as i64;
if date + Duration::seconds(max_time) < Utc::now().naive_utc() {
err!("Token has expired")
err!(
"Token has expired",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
Ok(())
}
/// Data stored in the TwoFactor table in the db
#[derive(Serialize, Deserialize)]
pub struct EmailTokenData {


@@ -5,8 +5,8 @@ use rocket::Route;
use serde_json::Value;
use crate::{
api::{JsonResult, JsonUpcase, NumberOrString, PasswordData},
auth::Headers,
api::{core::log_user_event, JsonResult, JsonUpcase, NumberOrString, PasswordData},
auth::{ClientHeaders, ClientIp, Headers},
crypto,
db::{models::*, DbConn, DbPool},
mail, CONFIG,
@@ -19,7 +19,14 @@ pub mod webauthn;
pub mod yubikey;
pub fn routes() -> Vec<Route> {
let mut routes = routes![get_twofactor, get_recover, recover, disable_twofactor, disable_twofactor_put,];
let mut routes = routes![
get_twofactor,
get_recover,
recover,
disable_twofactor,
disable_twofactor_put,
get_device_verification_settings,
];
routes.append(&mut authenticator::routes());
routes.append(&mut duo::routes());
@@ -31,8 +38,8 @@ pub fn routes() -> Vec<Route> {
}
#[get("/two-factor")]
async fn get_twofactor(headers: Headers, conn: DbConn) -> Json<Value> {
let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn).await;
async fn get_twofactor(headers: Headers, mut conn: DbConn) -> Json<Value> {
let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &mut conn).await;
let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_provider).collect();
Json(json!({
@@ -66,13 +73,18 @@ struct RecoverTwoFactor {
}
#[post("/two-factor/recover", data = "<data>")]
async fn recover(data: JsonUpcase<RecoverTwoFactor>, conn: DbConn) -> JsonResult {
async fn recover(
data: JsonUpcase<RecoverTwoFactor>,
client_headers: ClientHeaders,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
let data: RecoverTwoFactor = data.into_inner().data;
use crate::db::models::User;
// Get the user
let mut user = match User::find_by_mail(&data.Email, &conn).await {
let mut user = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
};
@@ -88,17 +100,19 @@ async fn recover(data: JsonUpcase<RecoverTwoFactor>, conn: DbConn) -> JsonResult
}
// Remove all twofactors from the user
TwoFactor::delete_all_by_user(&user.uuid, &conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
log_user_event(EventType::UserRecovered2fa as i32, &user.uuid, client_headers.device_type, &ip.ip, &mut conn).await;
// Remove the recovery code, not needed without twofactors
user.totp_recover = None;
user.save(&conn).await?;
user.save(&mut conn).await?;
Ok(Json(json!({})))
}
async fn _generate_recover_code(user: &mut User, conn: &DbConn) {
async fn _generate_recover_code(user: &mut User, conn: &mut DbConn) {
if user.totp_recover.is_none() {
let totp_recover = BASE32.encode(&crypto::get_random(vec![0u8; 20]));
let totp_recover = crypto::encode_random_bytes::<20>(BASE32);
user.totp_recover = Some(totp_recover);
user.save(conn).await.ok();
}
@@ -112,7 +126,12 @@ struct DisableTwoFactorData {
}
#[post("/two-factor/disable", data = "<data>")]
async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn disable_twofactor(
data: JsonUpcase<DisableTwoFactorData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
let data: DisableTwoFactorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let user = headers.user;
@@ -123,24 +142,25 @@ async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Head
let type_ = data.Type.into_i32()?;
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await {
twofactor.delete(&conn).await?;
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
twofactor.delete(&mut conn).await?;
log_user_event(EventType::UserDisabled2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
}
let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &conn).await.is_empty();
let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty();
if twofactor_disabled {
for user_org in
UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, &conn)
UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, &mut conn)
.await
.into_iter()
{
if user_org.atype < UserOrgType::Admin {
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&user_org.org_uuid, &conn).await.unwrap();
let org = Organization::find_by_uuid(&user_org.org_uuid, &mut conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
user_org.delete(&conn).await?;
user_org.delete(&mut conn).await?;
}
}
}
@@ -153,8 +173,13 @@ async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Head
}
#[put("/two-factor/disable", data = "<data>")]
async fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
disable_twofactor(data, headers, conn).await
async fn disable_twofactor_put(
data: JsonUpcase<DisableTwoFactorData>,
headers: Headers,
conn: DbConn,
ip: ClientIp,
) -> JsonResult {
disable_twofactor(data, headers, conn, ip).await
}
pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
@@ -164,7 +189,7 @@ pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
return;
}
let conn = match pool.get().await {
let mut conn = match pool.get().await {
Ok(conn) => conn,
_ => {
error!("Failed to get DB connection in send_incomplete_2fa_notifications()");
@@ -175,9 +200,9 @@ pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
let now = Utc::now().naive_utc();
let time_limit = Duration::minutes(CONFIG.incomplete_2fa_time_limit());
let time_before = now - time_limit;
let incomplete_logins = TwoFactorIncomplete::find_logins_before(&time_before, &conn).await;
let incomplete_logins = TwoFactorIncomplete::find_logins_before(&time_before, &mut conn).await;
for login in incomplete_logins {
let user = User::find_by_uuid(&login.user_uuid, &conn).await.expect("User not found");
let user = User::find_by_uuid(&login.user_uuid, &mut conn).await.expect("User not found");
info!(
"User {} did not complete a 2FA login within the configured time limit. IP: {}",
user.email, login.ip_address
@@ -185,6 +210,24 @@ pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
mail::send_incomplete_2fa_login(&user.email, &login.ip_address, &login.login_time, &login.device_name)
.await
.expect("Error sending incomplete 2FA email");
login.delete(&conn).await.expect("Error deleting incomplete 2FA record");
login.delete(&mut conn).await.expect("Error deleting incomplete 2FA record");
}
}
// This function is currently just a dummy; the actual functionality is not implemented yet.
// This also prevents 404 errors.
//
// See the following Bitwarden PRs regarding this feature.
// https://github.com/bitwarden/clients/pull/2843
// https://github.com/bitwarden/clients/pull/2839
// https://github.com/bitwarden/server/pull/2016
//
// The HTML part is hidden via the CSS patches done via the bw_web_build repo
#[get("/two-factor/get-device-verification-settings")]
fn get_device_verification_settings(_headers: Headers, _conn: DbConn) -> Json<Value> {
Json(json!({
"isDeviceVerificationSectionEnabled":false,
"unknownDeviceVerificationEnabled":false,
"object":"deviceVerificationSettings"
}))
}


@@ -6,11 +6,12 @@ use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState,
use crate::{
api::{
core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
},
auth::Headers,
auth::{ClientIp, Headers},
db::{
models::{TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
error::Error,
@@ -102,7 +103,7 @@ impl WebauthnRegistration {
}
#[post("/two-factor/get-webauthn", data = "<data>")]
async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
if !CONFIG.domain_set() {
err!("`DOMAIN` environment variable is not set. Webauthn disabled")
}
@@ -111,7 +112,7 @@ async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: Db
err!("Invalid password");
}
let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &conn).await?;
let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &mut conn).await?;
let registrations_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
@@ -122,12 +123,12 @@ async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: Db
}
#[post("/two-factor/get-webauthn-challenge", data = "<data>")]
async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let registrations = get_webauthn_registrations(&headers.user.uuid, &conn)
let registrations = get_webauthn_registrations(&headers.user.uuid, &mut conn)
.await?
.1
.into_iter()
@@ -144,7 +145,7 @@ async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: He
)?;
let type_ = TwoFactorType::WebauthnRegisterChallenge;
TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&conn).await?;
TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&mut conn).await?;
let mut challenge_value = serde_json::to_value(challenge.public_key)?;
challenge_value["status"] = "ok".into();
@@ -241,7 +242,12 @@ impl From<PublicKeyCredentialCopy> for PublicKeyCredential {
}
#[post("/two-factor/webauthn", data = "<data>")]
async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_webauthn(
data: JsonUpcase<EnableWebauthnData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
let data: EnableWebauthnData = data.into_inner().data;
let mut user = headers.user;
@@ -251,10 +257,10 @@ async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Header
// Retrieve and delete the saved challenge state
let type_ = TwoFactorType::WebauthnRegisterChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await {
let state = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
Some(tf) => {
let state: RegistrationState = serde_json::from_str(&tf.data)?;
tf.delete(&conn).await?;
tf.delete(&mut conn).await?;
state
}
None => err!("Can't recover challenge"),
@@ -264,7 +270,7 @@ async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Header
let (credential, _data) =
WebauthnConfig::load().register_credential(&data.DeviceResponse.into(), &state, |_| Ok(false))?;
let mut registrations: Vec<_> = get_webauthn_registrations(&user.uuid, &conn).await?.1;
let mut registrations: Vec<_> = get_webauthn_registrations(&user.uuid, &mut conn).await?.1;
// TODO: Check for repeated ID's
registrations.push(WebauthnRegistration {
id: data.Id.into_i32()?,
@@ -276,9 +282,11 @@ async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Header
// Save the registrations and return them
TwoFactor::new(user.uuid.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
.save(&conn)
.save(&mut conn)
.await?;
_generate_recover_code(&mut user, &conn).await;
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
let keys_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
@@ -289,8 +297,13 @@ async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Header
}
#[put("/two-factor/webauthn", data = "<data>")]
async fn activate_webauthn_put(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_webauthn(data, headers, conn).await
async fn activate_webauthn_put(
data: JsonUpcase<EnableWebauthnData>,
headers: Headers,
conn: DbConn,
ip: ClientIp,
) -> JsonResult {
activate_webauthn(data, headers, conn, ip).await
}
#[derive(Deserialize, Debug)]
@@ -301,17 +314,17 @@ struct DeleteU2FData {
}
#[delete("/two-factor/webauthn", data = "<data>")]
async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let id = data.data.Id.into_i32()?;
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &conn).await
{
Some(tf) => tf,
None => err!("Webauthn data not found!"),
};
let mut tf =
match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &mut conn).await {
Some(tf) => tf,
None => err!("Webauthn data not found!"),
};
let mut data: Vec<WebauthnRegistration> = serde_json::from_str(&tf.data)?;
@@ -322,11 +335,12 @@ async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn
let removed_item = data.remove(item_pos);
tf.data = serde_json::to_string(&data)?;
tf.save(&conn).await?;
tf.save(&mut conn).await?;
drop(tf);
// If entry is migrated from u2f, delete the u2f entry as well
if let Some(mut u2f) = TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::U2f as i32, &conn).await
if let Some(mut u2f) =
TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::U2f as i32, &mut conn).await
{
let mut data: Vec<U2FRegistration> = match serde_json::from_str(&u2f.data) {
Ok(d) => d,
@@ -337,7 +351,7 @@ async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn
let new_data_str = serde_json::to_string(&data)?;
u2f.data = new_data_str;
u2f.save(&conn).await?;
u2f.save(&mut conn).await?;
}
let keys_json: Vec<Value> = data.iter().map(WebauthnRegistration::to_json).collect();
@@ -351,7 +365,7 @@ async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn
pub async fn get_webauthn_registrations(
user_uuid: &str,
conn: &DbConn,
conn: &mut DbConn,
) -> Result<(bool, Vec<WebauthnRegistration>), Error> {
let type_ = TwoFactorType::Webauthn as i32;
match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
@@ -360,7 +374,7 @@ pub async fn get_webauthn_registrations(
}
}
pub async fn generate_webauthn_login(user_uuid: &str, conn: &DbConn) -> JsonResult {
pub async fn generate_webauthn_login(user_uuid: &str, conn: &mut DbConn) -> JsonResult {
// Load saved credentials
let creds: Vec<Credential> =
get_webauthn_registrations(user_uuid, conn).await?.1.into_iter().map(|r| r.credential).collect();
@@ -382,7 +396,7 @@ pub async fn generate_webauthn_login(user_uuid: &str, conn: &DbConn) -> JsonResu
Ok(Json(serde_json::to_value(response.public_key)?))
}
pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &mut DbConn) -> EmptyResult {
let type_ = TwoFactorType::WebauthnLoginChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
Some(tf) => {
@@ -390,7 +404,12 @@ pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbC
tf.delete(conn).await?;
state
}
None => err!("Can't recover login challenge"),
None => err!(
"Can't recover login challenge",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
),
};
let rsp: crate::util::UpCase<PublicKeyCredentialCopy> = serde_json::from_str(response)?;
@@ -413,5 +432,10 @@ pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbC
}
}
err!("Credential not present")
err!(
"Credential not present",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}

View File

@@ -4,10 +4,13 @@ use serde_json::Value;
use yubico::{config::Config, verify};
use crate::{
api::{core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, PasswordData},
auth::Headers,
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordData,
},
auth::{ClientIp, Headers},
db::{
models::{TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
error::{Error, MapResult},
@@ -64,21 +67,23 @@ fn get_yubico_credentials() -> Result<(String, String), Error> {
}
}
fn verify_yubikey_otp(otp: String) -> EmptyResult {
async fn verify_yubikey_otp(otp: String) -> EmptyResult {
let (yubico_id, yubico_secret) = get_yubico_credentials()?;
let config = Config::default().set_client_id(yubico_id).set_key(yubico_secret);
match CONFIG.yubico_server() {
Some(server) => verify(otp, config.set_api_hosts(vec![server])),
None => verify(otp, config),
Some(server) => {
tokio::task::spawn_blocking(move || verify(otp, config.set_api_hosts(vec![server]))).await.unwrap()
}
None => tokio::task::spawn_blocking(move || verify(otp, config)).await.unwrap(),
}
.map_res("Failed to verify OTP")
.and(Ok(()))
}
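
The hunk above moves the blocking yubico `verify` call onto Tokio's blocking thread pool so it no longer stalls the async executor. A minimal sketch of the same pattern, assuming the tokio crate with its default runtime and macro features, and using a hypothetical `blocking_verify` stand-in rather than the yubico crate:

use tokio::task;

// Stand-in for a synchronous, blocking call (e.g. a blocking HTTP client).
fn blocking_verify(otp: String) -> Result<(), String> {
    std::thread::sleep(std::time::Duration::from_millis(50));
    if otp.len() == 44 {
        Ok(())
    } else {
        Err("invalid OTP".into())
    }
}

async fn verify_async(otp: String) -> Result<(), String> {
    // spawn_blocking moves the closure onto a dedicated thread pool,
    // so the async runtime threads stay free while it runs.
    task::spawn_blocking(move || blocking_verify(otp))
        .await
        .expect("blocking task panicked")
}

#[tokio::main]
async fn main() {
    let result = verify_async("c".repeat(44)).await;
    println!("{result:?}");
}
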
#[post("/two-factor/get-yubikey", data = "<data>")]
async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
// Make sure the credentials are set
get_yubico_credentials()?;
@@ -92,7 +97,7 @@ async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn
let user_uuid = &user.uuid;
let yubikey_type = TwoFactorType::YubiKey as i32;
let r = TwoFactor::find_by_user_and_type(user_uuid, yubikey_type, &conn).await;
let r = TwoFactor::find_by_user_and_type(user_uuid, yubikey_type, &mut conn).await;
if let Some(r) = r {
let yubikey_metadata: YubikeyMetadata = serde_json::from_str(&r.data)?;
@@ -113,7 +118,12 @@ async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn
}
#[post("/two-factor/yubikey", data = "<data>")]
async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_yubikey(
data: JsonUpcase<EnableYubikeyData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
let data: EnableYubikeyData = data.into_inner().data;
let mut user = headers.user;
@@ -123,7 +133,7 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
// Check if we already have some data
let mut yubikey_data =
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::YubiKey as i32, &conn).await {
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::YubiKey as i32, &mut conn).await {
Some(data) => data,
None => TwoFactor::new(user.uuid.clone(), TwoFactorType::YubiKey, String::new()),
};
@@ -144,7 +154,7 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
continue;
}
verify_yubikey_otp(yubikey.to_owned()).map_res("Invalid Yubikey OTP provided")?;
verify_yubikey_otp(yubikey.to_owned()).await.map_res("Invalid Yubikey OTP provided")?;
}
let yubikey_ids: Vec<String> = yubikeys.into_iter().map(|x| (x[..12]).to_owned()).collect();
@@ -155,9 +165,11 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
};
yubikey_data.data = serde_json::to_string(&yubikey_metadata).unwrap();
yubikey_data.save(&conn).await?;
yubikey_data.save(&mut conn).await?;
_generate_recover_code(&mut user, &conn).await;
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
let mut result = jsonify_yubikeys(yubikey_metadata.Keys);
@@ -169,11 +181,16 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
}
#[put("/two-factor/yubikey", data = "<data>")]
async fn activate_yubikey_put(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_yubikey(data, headers, conn).await
async fn activate_yubikey_put(
data: JsonUpcase<EnableYubikeyData>,
headers: Headers,
conn: DbConn,
ip: ClientIp,
) -> JsonResult {
activate_yubikey(data, headers, conn, ip).await
}
pub fn validate_yubikey_login(response: &str, twofactor_data: &str) -> EmptyResult {
pub async fn validate_yubikey_login(response: &str, twofactor_data: &str) -> EmptyResult {
if response.len() != 44 {
err!("Invalid Yubikey OTP length");
}
@@ -185,7 +202,7 @@ pub fn validate_yubikey_login(response: &str, twofactor_data: &str) -> EmptyResu
err!("Given Yubikey is not registered");
}
let result = verify_yubikey_otp(response.to_owned());
let result = verify_yubikey_otp(response.to_owned()).await;
match result {
Ok(_answer) => Ok(()),

View File

@@ -30,10 +30,7 @@ use crate::{
pub fn routes() -> Vec<Route> {
match CONFIG.icon_service().as_str() {
"internal" => routes![icon_internal],
"bitwarden" => routes![icon_bitwarden],
"duckduckgo" => routes![icon_duckduckgo],
"google" => routes![icon_google],
_ => routes![icon_custom],
_ => routes![icon_external],
}
}
@@ -100,23 +97,8 @@ async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
}
#[get("/<domain>/icon.png")]
async fn icon_custom(domain: String) -> Option<Redirect> {
icon_redirect(&domain, &CONFIG.icon_service()).await
}
#[get("/<domain>/icon.png")]
async fn icon_bitwarden(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://icons.bitwarden.net/{}/icon.png").await
}
#[get("/<domain>/icon.png")]
async fn icon_duckduckgo(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://icons.duckduckgo.com/ip3/{}.ico").await
}
#[get("/<domain>/icon.png")]
async fn icon_google(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://www.google.com/s2/favicons?domain={}&sz=32").await
async fn icon_external(domain: String) -> Option<Redirect> {
icon_redirect(&domain, &CONFIG._icon_service_url()).await
}
#[get("/<domain>/icon.png")]
@@ -278,19 +260,9 @@ mod tests {
use cached::proc_macro::cached;
#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)]
#[allow(clippy::unused_async)] // This is needed because cached causes a false-positive here.
async fn is_domain_blacklisted(domain: &str) -> bool {
if CONFIG.icon_blacklist_non_global_ips() {
if let Ok(s) = lookup_host((domain, 0)).await {
for addr in s {
if !is_global(addr.ip()) {
debug!("IP {} for domain '{}' is not a global IP!", addr.ip(), domain);
return true;
}
}
}
}
// First check the blacklist regex to see if there is a match.
// This prevents the blocked domain(s) from being leaked via a DNS lookup.
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
// Use the pre-generated Regex stored in a Lazy HashMap if there is one, else generate it.
let is_match = if let Some(regex) = ICON_BLACKLIST_REGEX.get(&blacklist) {
@@ -315,6 +287,18 @@ async fn is_domain_blacklisted(domain: &str) -> bool {
return true;
}
}
if CONFIG.icon_blacklist_non_global_ips() {
if let Ok(s) = lookup_host((domain, 0)).await {
for addr in s {
if !is_global(addr.ip()) {
debug!("IP {} for domain '{}' is not a global IP!", addr.ip(), domain);
return true;
}
}
}
}
false
}
@@ -538,7 +522,7 @@ async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// Create the iconlist
let mut iconlist: Vec<Icon> = Vec::new();
let mut referer = String::from("");
let mut referer = String::new();
if let Ok(content) = resp {
// Extract the URL from the response in case redirects occurred (like @ gitlab.com)

View File

@@ -9,28 +9,31 @@ use serde_json::Value;
use crate::{
api::{
core::accounts::{PreloginData, _prelogin},
core::accounts::{PreloginData, RegisterData, _prelogin, _register},
core::log_user_event,
core::two_factor::{duo, email, email::EmailTokenData, yubikey},
ApiResult, EmptyResult, JsonResult, JsonUpcase,
},
auth::ClientIp,
auth::{ClientHeaders, ClientIp},
db::{models::*, DbConn},
error::MapResult,
mail, util, CONFIG,
};
pub fn routes() -> Vec<Route> {
routes![login, prelogin]
routes![login, prelogin, identity_register]
}
#[post("/connect/token", data = "<data>")]
async fn login(data: Form<ConnectData>, conn: DbConn, ip: ClientIp) -> JsonResult {
async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn: DbConn, ip: ClientIp) -> JsonResult {
let data: ConnectData = data.into_inner();
match data.grant_type.as_ref() {
let mut user_uuid: Option<String> = None;
let login_result = match data.grant_type.as_ref() {
"refresh_token" => {
_check_is_some(&data.refresh_token, "refresh_token cannot be blank")?;
_refresh_login(data, conn).await
_refresh_login(data, &mut conn).await
}
"password" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
@@ -42,34 +45,56 @@ async fn login(data: Form<ConnectData>, conn: DbConn, ip: ClientIp) -> JsonResul
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_password_login(data, conn, &ip).await
_password_login(data, &mut user_uuid, &mut conn, &ip).await
}
"client_credentials" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
_check_is_some(&data.client_secret, "client_secret cannot be blank")?;
_check_is_some(&data.scope, "scope cannot be blank")?;
_api_key_login(data, conn, &ip).await
_api_key_login(data, &mut user_uuid, &mut conn, &ip).await
}
t => err!("Invalid type", t),
};
if let Some(user_uuid) = user_uuid {
match &login_result {
Ok(_) => {
log_user_event(
EventType::UserLoggedIn as i32,
&user_uuid,
client_header.device_type,
&ip.ip,
&mut conn,
)
.await;
}
Err(e) => {
if let Some(ev) = e.get_event() {
log_user_event(ev.event as i32, &user_uuid, client_header.device_type, &ip.ip, &mut conn).await
}
}
}
}
login_result
}
async fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
// Extract token
let token = data.refresh_token.unwrap();
// Get device by refresh token
let mut device = Device::find_by_refresh_token(&token, &conn).await.map_res("Invalid refresh token")?;
let mut device = Device::find_by_refresh_token(&token, conn).await.map_res("Invalid refresh token")?;
let scope = "api offline_access";
let scope_vec = vec!["api".into(), "offline_access".into()];
// Common
let user = User::find_by_uuid(&device.user_uuid, &conn).await.unwrap();
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn).await;
let user = User::find_by_uuid(&device.user_uuid, conn).await.unwrap();
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn).await?;
device.save(conn).await?;
Ok(Json(json!({
"access_token": access_token,
@@ -87,7 +112,12 @@ async fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
})))
}
async fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult {
async fn _password_login(
data: ConnectData,
user_uuid: &mut Option<String>,
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
// Validate scope
let scope = data.scope.as_ref().unwrap();
if scope != "api offline_access" {
@@ -100,20 +130,35 @@ async fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> Json
// Get the user
let username = data.username.as_ref().unwrap().trim();
let user = match User::find_by_mail(username, &conn).await {
let user = match User::find_by_mail(username, conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username)),
};
// Set the user_uuid here so it can be passed back and used for event logging.
*user_uuid = Some(user.uuid.clone());
// Check password
let password = data.password.as_ref().unwrap();
if !user.check_valid_password(password) {
err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username))
err!(
"Username or password is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn,
}
)
}
// Check if the user is disabled
if !user.enabled {
err!("This user has been disabled", format!("IP: {}. Username: {}.", ip.ip, username))
err!(
"This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
let now = Utc::now().naive_utc();
@@ -131,7 +176,7 @@ async fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> Json
user.last_verifying_at = Some(now);
user.login_verify_count += 1;
if let Err(e) = user.save(&conn).await {
if let Err(e) = user.save(conn).await {
error!("Error updating user: {:#?}", e);
}
@@ -142,27 +187,38 @@ async fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> Json
}
// We still want the login to fail until they actually verified the email address
err!("Please verify your email before trying again.", format!("IP: {}. Username: {}.", ip.ip, username))
err!(
"Please verify your email before trying again.",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
let (mut device, new_device) = get_device(&data, &conn, &user).await;
let (mut device, new_device) = get_device(&data, conn, &user).await;
let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, &conn).await?;
let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, conn).await?;
if CONFIG.mail_enabled() && new_device {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name).await {
error!("Error sending new device email: {:#?}", e);
if CONFIG.require_device_email() {
err!("Could not send login notification email. Please contact your administrator.")
err!(
"Could not send login notification email. Please contact your administrator.",
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
}
}
// Common
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn).await;
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn).await?;
device.save(conn).await?;
let mut result = json!({
"access_token": access_token,
@@ -188,7 +244,12 @@ async fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> Json
Ok(Json(result))
}
async fn _api_key_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult {
async fn _api_key_login(
data: ConnectData,
user_uuid: &mut Option<String>,
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
// Validate scope
let scope = data.scope.as_ref().unwrap();
if scope != "api" {
@@ -201,27 +262,42 @@ async fn _api_key_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonR
// Get the user via the client_id
let client_id = data.client_id.as_ref().unwrap();
let user_uuid = match client_id.strip_prefix("user.") {
let client_user_uuid = match client_id.strip_prefix("user.") {
Some(uuid) => uuid,
None => err!("Malformed client_id", format!("IP: {}.", ip.ip)),
};
let user = match User::find_by_uuid(user_uuid, &conn).await {
let user = match User::find_by_uuid(client_user_uuid, conn).await {
Some(user) => user,
None => err!("Invalid client_id", format!("IP: {}.", ip.ip)),
};
// Set the user_uuid here so it can be passed back and used for event logging.
*user_uuid = Some(user.uuid.clone());
// Check if the user is disabled
if !user.enabled {
err!("This user has been disabled (API key login)", format!("IP: {}. Username: {}.", ip.ip, user.email))
err!(
"This user has been disabled (API key login)",
format!("IP: {}. Username: {}.", ip.ip, user.email),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
// Check API key. Note that API key logins bypass 2FA.
let client_secret = data.client_secret.as_ref().unwrap();
if !user.check_valid_api_key(client_secret) {
err!("Incorrect client_secret", format!("IP: {}. Username: {}.", ip.ip, user.email))
err!(
"Incorrect client_secret",
format!("IP: {}. Username: {}.", ip.ip, user.email),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
let (mut device, new_device) = get_device(&data, &conn, &user).await;
let (mut device, new_device) = get_device(&data, conn, &user).await;
if CONFIG.mail_enabled() && new_device {
let now = Utc::now().naive_utc();
@@ -229,15 +305,20 @@ async fn _api_key_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonR
error!("Error sending new device email: {:#?}", e);
if CONFIG.require_device_email() {
err!("Could not send login notification email. Please contact your administrator.")
err!(
"Could not send login notification email. Please contact your administrator.",
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
}
}
// Common
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn).await;
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn).await?;
device.save(conn).await?;
info!("User {} logged in successfully via API key. IP: {}", user.email, ip.ip);
@@ -259,9 +340,10 @@ async fn _api_key_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonR
}
/// Retrieves an existing device or creates a new device from ConnectData and the User
async fn get_device(data: &ConnectData, conn: &DbConn, user: &User) -> (Device, bool) {
async fn get_device(data: &ConnectData, conn: &mut DbConn, user: &User) -> (Device, bool) {
// On iOS, device_type sends "iOS", on others it sends a number
let device_type = util::try_parse_string(data.device_type.as_ref()).unwrap_or(0);
// When unknown or unable to parse, return 14, which is 'Unknown Browser'
let device_type = util::try_parse_string(data.device_type.as_ref()).unwrap_or(14);
let device_id = data.device_identifier.clone().expect("No device id provided");
let device_name = data.device_name.clone().expect("No device name provided");
@@ -283,7 +365,7 @@ async fn twofactor_auth(
data: &ConnectData,
device: &mut Device,
ip: &ClientIp,
conn: &DbConn,
conn: &mut DbConn,
) -> ApiResult<Option<String>> {
let twofactors = TwoFactor::find_by_user(user_uuid, conn).await;
@@ -317,7 +399,7 @@ async fn twofactor_auth(
Some(TwoFactorType::Webauthn) => {
_tf::webauthn::validate_webauthn_login(user_uuid, twofactor_code, conn).await?
}
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?)?,
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?).await?,
Some(TwoFactorType::Duo) => {
_tf::duo::validate_duo_login(data.username.as_ref().unwrap().trim(), twofactor_code, conn).await?
}
@@ -338,7 +420,12 @@ async fn twofactor_auth(
}
}
}
_ => err!("Invalid two factor provider"),
_ => err!(
"Invalid two factor provider",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
),
}
TwoFactorIncomplete::mark_complete(user_uuid, &device.uuid, conn).await?;
@@ -355,7 +442,7 @@ fn _selected_data(tf: Option<TwoFactor>) -> ApiResult<String> {
tf.map(|t| t.data).map_res("Two factor doesn't exist")
}
async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> ApiResult<Value> {
async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbConn) -> ApiResult<Value> {
use crate::api::core::two_factor;
let mut result = json!({
@@ -434,6 +521,11 @@ async fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
#[post("/accounts/register", data = "<data>")]
async fn identity_register(data: JsonUpcase<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, conn).await
}
// https://github.com/bitwarden/jslib/blob/master/common/src/models/request/tokenRequest.ts
// https://github.com/bitwarden/mobile/blob/master/src/Core/Models/Request/TokenRequest.cs
#[derive(Debug, Clone, Default, FromForm)]

View File

@@ -9,17 +9,22 @@ use rocket::serde::json::Json;
use serde_json::Value;
pub use crate::api::{
admin::catchers as admin_catchers,
admin::routes as admin_routes,
core::catchers as core_catchers,
core::purge_sends,
core::purge_trashed_ciphers,
core::routes as core_routes,
core::two_factor::send_incomplete_2fa_notifications,
core::{emergency_notification_reminder_job, emergency_request_timeout_job},
core::{event_cleanup_job, events_routes as core_events_routes},
icons::routes as icons_routes,
identity::routes as identity_routes,
notifications::routes as notifications_routes,
notifications::{start_notification_server, Notify, UpdateType},
web::catchers as web_catchers,
web::routes as web_routes,
web::static_files,
};
use crate::util;
@@ -30,6 +35,7 @@ pub type EmptyResult = ApiResult<()>;
type JsonUpcase<T> = Json<util::UpCase<T>>;
type JsonUpcaseVec<T> = Json<Vec<util::UpCase<T>>>;
type JsonVec<T> = Json<Vec<T>>;
// Common structs representing JSON data received
#[derive(Deserialize)]

View File

@@ -56,7 +56,7 @@ fn negotiate(_headers: Headers) -> Json<JsonValue> {
use crate::crypto;
use data_encoding::BASE64URL;
let conn_id = BASE64URL.encode(&crypto::get_random(vec![0u8; 16]));
let conn_id = crypto::encode_random_bytes::<16>(BASE64URL);
let mut available_transports: Vec<JsonValue> = Vec::new();
if CONFIG.websocket_enabled() {

View File

@@ -1,11 +1,10 @@
use std::path::{Path, PathBuf};
use rocket::serde::json::Json;
use rocket::{fs::NamedFile, http::ContentType, Route};
use rocket::{fs::NamedFile, http::ContentType, response::content::RawHtml as Html, serde::json::Json, Catcher, Route};
use serde_json::Value;
use crate::{
api::core::now,
api::{core::now, ApiResult},
error::Error,
util::{Cached, SafeString},
CONFIG,
@@ -21,6 +20,24 @@ pub fn routes() -> Vec<Route> {
}
}
pub fn catchers() -> Vec<Catcher> {
if CONFIG.web_vault_enabled() {
catchers![not_found]
} else {
catchers![]
}
}
#[catch(404)]
fn not_found() -> ApiResult<Html<String>> {
// Return the page
let json = json!({
"urlpath": CONFIG.domain_path()
});
let text = CONFIG.render_template("404", &json)?;
Ok(Html(text))
}
#[get("/")]
async fn web_index() -> Cached<Option<NamedFile>> {
Cached::short(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join("index.html")).await.ok(), false)
@@ -76,20 +93,22 @@ fn alive(_conn: DbConn) -> Json<String> {
}
#[get("/vw_static/<filename>")]
fn static_files(filename: String) -> Result<(ContentType, &'static [u8]), Error> {
pub fn static_files(filename: String) -> Result<(ContentType, &'static [u8]), Error> {
match filename.as_ref() {
"404.png" => Ok((ContentType::PNG, include_bytes!("../static/images/404.png"))),
"mail-github.png" => Ok((ContentType::PNG, include_bytes!("../static/images/mail-github.png"))),
"logo-gray.png" => Ok((ContentType::PNG, include_bytes!("../static/images/logo-gray.png"))),
"error-x.svg" => Ok((ContentType::SVG, include_bytes!("../static/images/error-x.svg"))),
"hibp.png" => Ok((ContentType::PNG, include_bytes!("../static/images/hibp.png"))),
"vaultwarden-icon.png" => Ok((ContentType::PNG, include_bytes!("../static/images/vaultwarden-icon.png"))),
"vaultwarden-favicon.png" => Ok((ContentType::PNG, include_bytes!("../static/images/vaultwarden-favicon.png"))),
"bootstrap.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))),
"bootstrap-native.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native.js"))),
"identicon.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))),
"jdenticon.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jdenticon.js"))),
"datatables.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"jquery-3.6.0.slim.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.6.0.slim.js")))
"jquery-3.6.2.slim.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.6.2.slim.js")))
}
_ => err!(format!("Static file not found: {}", filename)),
}

View File

@@ -1,18 +1,14 @@
//
// JWT Handling
//
use chrono::{Duration, Utc};
use num_traits::FromPrimitive;
use once_cell::sync::Lazy;
use jsonwebtoken::{self, Algorithm, DecodingKey, EncodingKey, Header};
use jsonwebtoken::{self, errors::ErrorKind, Algorithm, DecodingKey, EncodingKey, Header};
use serde::de::DeserializeOwned;
use serde::ser::Serialize;
use crate::{
error::{Error, MapResult},
CONFIG,
};
use crate::{error::Error, CONFIG};
const JWT_ALGORITHM: Algorithm = Algorithm::RS256;
@@ -29,13 +25,13 @@ static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.
static JWT_SEND_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|send", CONFIG.domain_origin()));
static PRIVATE_RSA_KEY_VEC: Lazy<Vec<u8>> = Lazy::new(|| {
std::fs::read(&CONFIG.private_rsa_key()).unwrap_or_else(|e| panic!("Error loading private RSA Key.\n{}", e))
std::fs::read(CONFIG.private_rsa_key()).unwrap_or_else(|e| panic!("Error loading private RSA Key.\n{}", e))
});
static PRIVATE_RSA_KEY: Lazy<EncodingKey> = Lazy::new(|| {
EncodingKey::from_rsa_pem(&PRIVATE_RSA_KEY_VEC).unwrap_or_else(|e| panic!("Error decoding private RSA Key.\n{}", e))
});
static PUBLIC_RSA_KEY_VEC: Lazy<Vec<u8>> = Lazy::new(|| {
std::fs::read(&CONFIG.public_rsa_key()).unwrap_or_else(|e| panic!("Error loading public RSA Key.\n{}", e))
std::fs::read(CONFIG.public_rsa_key()).unwrap_or_else(|e| panic!("Error loading public RSA Key.\n{}", e))
});
static PUBLIC_RSA_KEY: Lazy<DecodingKey> = Lazy::new(|| {
DecodingKey::from_rsa_pem(&PUBLIC_RSA_KEY_VEC).unwrap_or_else(|e| panic!("Error decoding public RSA Key.\n{}", e))
@@ -61,7 +57,15 @@ fn decode_jwt<T: DeserializeOwned>(token: &str, issuer: String) -> Result<T, Err
validation.set_issuer(&[issuer]);
let token = token.replace(char::is_whitespace, "");
jsonwebtoken::decode(&token, &PUBLIC_RSA_KEY, &validation).map(|d| d.claims).map_res("Error decoding JWT")
match jsonwebtoken::decode(&token, &PUBLIC_RSA_KEY, &validation) {
Ok(d) => Ok(d.claims),
Err(err) => match *err.kind() {
ErrorKind::InvalidToken => err!("Token is invalid"),
ErrorKind::InvalidIssuer => err!("Issuer is invalid"),
ErrorKind::ExpiredSignature => err!("Token has expired"),
_ => err!("Error decoding JWT"),
},
}
}
pub fn decode_login(token: &str) -> Result<LoginJwtClaims, Error> {
@@ -148,9 +152,10 @@ pub fn generate_invite_claims(
invited_by_email: Option<String>,
) -> InviteJwtClaims {
let time_now = Utc::now().naive_utc();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
InviteJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
iss: JWT_INVITE_ISSUER.to_string(),
sub: uuid,
email,
@@ -172,22 +177,23 @@ pub struct EmergencyAccessInviteJwtClaims {
pub sub: String,
pub email: String,
pub emer_id: Option<String>,
pub grantor_name: Option<String>,
pub grantor_email: Option<String>,
pub emer_id: String,
pub grantor_name: String,
pub grantor_email: String,
}
pub fn generate_emergency_access_invite_claims(
uuid: String,
email: String,
emer_id: Option<String>,
grantor_name: Option<String>,
grantor_email: Option<String>,
emer_id: String,
grantor_name: String,
grantor_email: String,
) -> EmergencyAccessInviteJwtClaims {
let time_now = Utc::now().naive_utc();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
EmergencyAccessInviteJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
iss: JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string(),
sub: uuid,
email,
@@ -211,9 +217,10 @@ pub struct BasicJwtClaims {
pub fn generate_delete_claims(uuid: String) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
iss: JWT_DELETE_ISSUER.to_string(),
sub: uuid,
}
@@ -221,9 +228,10 @@ pub fn generate_delete_claims(uuid: String) -> BasicJwtClaims {
pub fn generate_verify_email_claims(uuid: String) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
iss: JWT_VERIFYEMAIL_ISSUER.to_string(),
sub: uuid,
}
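
The claim generators above switch from a hard-coded `Duration::days(5)` to `Duration::hours(CONFIG.invitation_expiration_hours())`; with the default of 120 hours (see the config section below) the expiry is unchanged, since 120 hours equals 5 days. A small sketch of the expiry arithmetic, assuming the chrono crate:

use chrono::{Duration, Utc};

fn main() {
    let expire_hours: i64 = 120; // default INVITATION_EXPIRATION_HOURS
    let now = Utc::now().naive_utc();
    let exp = (now + Duration::hours(expire_hours)).timestamp();
    let old_exp = (now + Duration::days(5)).timestamp();
    // 120 hours and 5 days yield the same expiry timestamp.
    assert_eq!(exp, old_exp);
    println!("token expires at unix time {exp}");
}
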
@@ -258,7 +266,7 @@ use rocket::{
};
use crate::db::{
models::{CollectionUser, Device, User, UserOrgStatus, UserOrgType, UserOrganization, UserStampException},
models::{Collection, Device, User, UserOrgStatus, UserOrgType, UserOrganization, UserStampException},
DbConn,
};
@@ -307,6 +315,28 @@ impl<'r> FromRequest<'r> for Host {
}
}
pub struct ClientHeaders {
pub host: String,
pub device_type: i32,
}
#[rocket::async_trait]
impl<'r> FromRequest<'r> for ClientHeaders {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let host = try_outcome!(Host::from_request(request).await).host;
// When unknown or unable to parse, return 14, which is 'Unknown Browser'
let device_type: i32 =
request.headers().get_one("device-type").map(|d| d.parse().unwrap_or(14)).unwrap_or_else(|| 14);
Outcome::Success(ClientHeaders {
host,
device_type,
})
}
}
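
`ClientHeaders` is a lighter request guard than `Headers`: it only needs the host and the numeric `device-type` header, falling back to 14 ("Unknown Browser") when the header is missing or unparsable. A standalone sketch of that parsing rule, with hypothetical header values for illustration:

/// Parse an optional `device-type` header value, defaulting to 14 (Unknown Browser).
fn parse_device_type(header: Option<&str>) -> i32 {
    header.map(|d| d.parse().unwrap_or(14)).unwrap_or(14)
}

fn main() {
    assert_eq!(parse_device_type(Some("9")), 9);    // a parsable client type
    assert_eq!(parse_device_type(Some("abc")), 14); // unparsable -> Unknown Browser
    assert_eq!(parse_device_type(None), 14);        // header missing -> Unknown Browser
}
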
pub struct Headers {
pub host: String,
pub device: Device,
@@ -340,17 +370,17 @@ impl<'r> FromRequest<'r> for Headers {
let device_uuid = claims.device;
let user_uuid = claims.sub;
let conn = match DbConn::from_request(request).await {
let mut conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let device = match Device::find_by_uuid_and_user(&device_uuid, &user_uuid, &conn).await {
let device = match Device::find_by_uuid_and_user(&device_uuid, &user_uuid, &mut conn).await {
Some(device) => device,
None => err_handler!("Invalid device id"),
};
let user = match User::find_by_uuid(&user_uuid, &conn).await {
let user = match User::find_by_uuid(&user_uuid, &mut conn).await {
Some(user) => user,
None => err_handler!("Device has no user associated"),
};
@@ -372,7 +402,7 @@ impl<'r> FromRequest<'r> for Headers {
// This prevents checking this stamp exception for new requests.
let mut user = user;
user.reset_stamp_exception();
if let Err(e) = user.save(&conn).await {
if let Err(e) = user.save(&mut conn).await {
error!("Error updating user: {:#?}", e);
}
err_handler!("Stamp exception is expired")
@@ -430,13 +460,13 @@ impl<'r> FromRequest<'r> for OrgHeaders {
let headers = try_outcome!(Headers::from_request(request).await);
match get_org_id(request) {
Some(org_id) => {
let conn = match DbConn::from_request(request).await {
let mut conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let user = headers.user;
let org_user = match UserOrganization::find_by_user_and_org(&user.uuid, &org_id, &conn).await {
let org_user = match UserOrganization::find_by_user_and_org(&user.uuid, &org_id, &mut conn).await {
Some(user) => {
if user.status == UserOrgStatus::Confirmed as i32 {
user
@@ -473,6 +503,7 @@ pub struct AdminHeaders {
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
pub client_version: Option<String>,
}
#[rocket::async_trait]
@@ -481,12 +512,14 @@ impl<'r> FromRequest<'r> for AdminHeaders {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
let client_version = request.headers().get_one("Bitwarden-Client-Version").map(String::from);
if headers.org_user_type >= UserOrgType::Admin {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
client_version,
})
} else {
err_handler!("You need to be Admin or Owner to call this endpoint")
@@ -542,18 +575,20 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
if headers.org_user_type >= UserOrgType::Manager {
match get_col_id(request) {
Some(col_id) => {
let conn = match DbConn::from_request(request).await {
let mut conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
if !headers.org_user.has_full_access() {
match CollectionUser::find_by_collection_and_user(&col_id, &headers.org_user.user_uuid, &conn)
.await
{
Some(_) => (),
None => err_handler!("The current user isn't a manager for this collection"),
}
if !headers.org_user.has_full_access()
&& !Collection::has_access_by_collection_and_user_uuid(
&col_id,
&headers.org_user.user_uuid,
&mut conn,
)
.await
{
err_handler!("The current user isn't a manager for this collection")
}
}
_ => err_handler!("Error getting the collection id"),

View File

@@ -1,6 +1,7 @@
use std::process::exit;
use std::sync::RwLock;
use job_scheduler_ng::Schedule;
use once_cell::sync::Lazy;
use reqwest::Url;
@@ -84,7 +85,7 @@ macro_rules! make_config {
let mut builder = ConfigBuilder::default();
$($(
builder.$name = make_config! { @getenv &stringify!($name).to_uppercase(), $ty };
builder.$name = make_config! { @getenv paste::paste!(stringify!([<$name:upper>])), $ty };
)+)+
builder
@@ -104,7 +105,7 @@ macro_rules! make_config {
builder.$name = v.clone();
if self.$name.is_some() {
overrides.push(stringify!($name).to_uppercase());
overrides.push(paste::paste!(stringify!([<$name:upper>])).into());
}
}
)+)+
@@ -194,7 +195,7 @@ macro_rules! make_config {
element.insert("default".into(), serde_json::to_value(def.$name).unwrap());
element.insert("type".into(), (_get_form_type(stringify!($ty))).into());
element.insert("doc".into(), (_get_doc(concat!($($doc),+))).into());
element.insert("overridden".into(), (overriden.contains(&stringify!($name).to_uppercase())).into());
element.insert("overridden".into(), (overriden.contains(&paste::paste!(stringify!([<$name:upper>])).into())).into());
element
}),
)+
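
The macro change replaces the runtime `stringify!($name).to_uppercase()` with `paste::paste!(stringify!([<$name:upper>]))`, which performs the case conversion at compile time and avoids a heap allocation per config field. A minimal sketch of what that invocation yields, assuming the `paste` crate (which the macro above now depends on); `env_key!` is a hypothetical wrapper for illustration:

// With the paste crate, `[<... :upper>]` uppercases the identifier at compile time,
// so stringify! then yields a 'static string literal.
macro_rules! env_key {
    ($name:ident) => {
        paste::paste!(stringify!([<$name:upper>]))
    };
}

fn main() {
    // Compile-time constant; no to_uppercase() allocation at runtime.
    let key: &'static str = env_key!(domain);
    assert_eq!(key, "DOMAIN");
}
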
@@ -231,14 +232,23 @@ macro_rules! make_config {
/// We map over the string and remove all alphanumeric, _ and - characters.
/// This is far faster (micro-seconds) than using a regex (which takes milli-seconds)
fn _privacy_mask(value: &str) -> String {
value.chars().map(|c|
match c {
c if c.is_alphanumeric() => '*',
'_' => '*',
'-' => '*',
_ => c
}
).collect::<String>()
let mut n: u16 = 0;
let mut colon_match = false;
value
.chars()
.map(|c| {
n += 1;
match c {
':' if n <= 11 => {
colon_match = true;
c
}
'/' if n <= 13 && colon_match => c,
',' => c,
_ => '*',
}
})
.collect::<String>()
}
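
The reworked `_privacy_mask` keeps a `:` within the first 11 characters (remembering that it saw one), a `/` within the first 13 characters once that colon was seen, and commas anywhere; every other character becomes `*`, so a connection string keeps its `scheme://` shape and list separators while the rest is hidden. A hedged, self-contained restatement with a hypothetical value for illustration:

fn privacy_mask(value: &str) -> String {
    let mut n: u16 = 0;
    let mut colon_match = false;
    value
        .chars()
        .map(|c| {
            n += 1;
            match c {
                // Keep an early ':' (e.g. the one in "mysql://") and remember it.
                ':' if n <= 11 => {
                    colon_match = true;
                    c
                }
                // Keep the scheme's '/' characters that follow that colon.
                '/' if n <= 13 && colon_match => c,
                // Keep commas so list lengths stay visible.
                ',' => c,
                _ => '*',
            }
        })
        .collect::<String>()
}

fn main() {
    // Hypothetical value, for illustration only.
    let masked = privacy_mask("mysql://user:pass@host/db");
    assert!(masked.starts_with("*****://"));                        // scheme separator survives
    assert_eq!(masked.len(), "mysql://user:pass@host/db".len());    // length is preserved
    println!("{masked}"); // scheme kept, everything after it masked
}
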
serde_json::Value::Object({
@@ -365,11 +375,14 @@ make_config! {
/// Defaults to once every minute. Set blank to disable this job.
incomplete_2fa_schedule: String, false, def, "30 * * * * *".to_string();
/// Emergency notification reminder schedule |> Cron schedule of the job that sends expiration reminders to emergency access grantors.
/// Defaults to hourly. Set blank to disable this job.
emergency_notification_reminder_schedule: String, false, def, "0 5 * * * *".to_string();
/// Defaults to hourly. (3 minutes after the hour) Set blank to disable this job.
emergency_notification_reminder_schedule: String, false, def, "0 3 * * * *".to_string();
/// Emergency request timeout schedule |> Cron schedule of the job that grants emergency access requests that have met the required wait time.
/// Defaults to hourly. Set blank to disable this job.
emergency_request_timeout_schedule: String, false, def, "0 5 * * * *".to_string();
/// Defaults to hourly (7 minutes after the hour). Set blank to disable this job.
emergency_request_timeout_schedule: String, false, def, "0 7 * * * *".to_string();
/// Event cleanup schedule |> Cron schedule of the job that cleans old events from the event table.
/// Defaults to daily. Set blank to disable this job.
event_cleanup_schedule: String, false, def, "0 10 0 * * *".to_string();
},
/// General settings
@@ -424,12 +437,17 @@ make_config! {
/// If signups require email verification, limit how many emails are automatically sent when login is attempted (0 means no limit)
signups_verify_resend_limit: u32, true, def, 6;
/// Email domain whitelist |> Allow signups only from this list of comma-separated domains, even when signups are otherwise disabled
signups_domains_whitelist: String, true, def, "".to_string();
signups_domains_whitelist: String, true, def, String::new();
/// Enable event logging |> Enables event logging for organizations.
org_events_enabled: bool, false, def, false;
/// Org creation users |> Allow org creation only by this list of comma-separated user emails.
/// Blank or 'all' means all users can create orgs; 'none' means no users can create orgs.
org_creation_users: String, true, def, "".to_string();
org_creation_users: String, true, def, String::new();
/// Allow invitations |> Controls whether users can be invited by organization admins, even when signups are otherwise disabled
invitations_allowed: bool, true, def, true;
/// Invitation token expiration time (in hours) |> The number of hours after which an organization invite token, emergency access invite token,
/// email verification token and deletion request token will expire (must be at least 1)
invitation_expiration_hours: u32, false, def, 120;
/// Allow emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
emergency_access_allowed: bool, true, def, true;
/// Password iterations |> Number of server-side passwords hashing iterations.
@@ -447,6 +465,9 @@ make_config! {
/// Invitation organization name |> Name shown in the invitation emails that don't come from a specific organization
invitation_org_name: String, true, def, "Vaultwarden".to_string();
/// Events days retain |> Number of days to retain events stored in the database. If unset, events are kept indefinitely.
events_days_retain: i64, false, option;
},
/// Advanced settings
@@ -463,6 +484,10 @@ make_config! {
/// service is set, an icon request to Vaultwarden will return an HTTP redirect to the
/// corresponding icon at the external service.
icon_service: String, false, def, "internal".to_string();
/// _icon_service_url
_icon_service_url: String, false, gen, |c| generate_icon_service_url(&c.icon_service);
/// _icon_service_csp
_icon_service_csp: String, false, gen, |c| generate_icon_service_csp(&c.icon_service, &c._icon_service_url);
/// Icon redirect code |> The HTTP status code to use for redirects to an external icon service.
/// The supported codes are 301 (legacy permanent), 302 (legacy temporary), 307 (temporary), and 308 (permanent).
/// Temporary redirects are useful while testing different icon services, but once a service
@@ -522,10 +547,10 @@ make_config! {
database_max_conns: u32, false, def, 10;
/// Database connection init |> SQL statements to run when creating a new database connection, mainly useful for connection-scoped pragmas. If empty, a database-specific default is used.
database_conn_init: String, false, def, "".to_string();
database_conn_init: String, false, def, String::new();
/// Bypass admin page security (Know the risks!) |> Disables the Admin Token for the admin page so you may use your own auth in-front
disable_admin_token: bool, true, def, false;
disable_admin_token: bool, false, def, false;
/// Allowed iframe ancestors (Know the risks!) |> Allows other domains to embed the web vault into an iframe, useful for embedding into secure intranets
allowed_iframe_ancestors: String, true, def, String::new();
@@ -535,10 +560,13 @@ make_config! {
/// Max burst size for login requests |> Allow a burst of requests of up to this size, while maintaining the average indicated by `login_ratelimit_seconds`. Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2
login_ratelimit_max_burst: u32, false, def, 10;
/// Seconds between admin requests |> Number of seconds, on average, between admin requests from the same IP address before rate limiting kicks in
/// Seconds between admin login requests |> Number of seconds, on average, between admin requests from the same IP address before rate limiting kicks in
admin_ratelimit_seconds: u64, false, def, 300;
/// Max burst size for login requests |> Allow a burst of requests of up to this size, while maintaining the average indicated by `admin_ratelimit_seconds`
/// Max burst size for admin login requests |> Allow a burst of requests of up to this size, while maintaining the average indicated by `admin_ratelimit_seconds`
admin_ratelimit_max_burst: u32, false, def, 3;
/// Enable groups (BETA!) (Know the risks!) |> Enables groups support for organizations (Currently contains known issues!).
org_groups_enabled: bool, false, def, false;
},
/// Yubikey settings
@@ -595,6 +623,10 @@ make_config! {
smtp_timeout: u64, true, def, 15;
/// Server name sent during HELO |> By default this value is based on the machine's hostname, but it might need to be changed in case it trips some anti-spam filters
helo_name: String, true, option;
/// Embed images as email attachments.
smtp_embed_images: bool, true, def, true;
/// _smtp_img_src
_smtp_img_src: String, false, gen, |c| generate_smtp_img_src(c.smtp_embed_images, &c.domain);
/// Enable SMTP debugging (Know the risks!) |> DANGEROUS: Enabling this will output very detailed SMTP messages. This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
smtp_debug: bool, false, def, false;
/// Accept Invalid Certs (Know the risks!) |> DANGEROUS: Allow invalid certificates. This option introduces significant vulnerabilities to man-in-the-middle attacks!
@@ -618,7 +650,15 @@ make_config! {
fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
// Validate connection URL is valid and DB feature is enabled
DbConnType::from_url(&cfg.database_url)?;
let url = &cfg.database_url;
if DbConnType::from_url(url)? == DbConnType::sqlite && url.contains('/') {
let path = std::path::Path::new(&url);
if let Some(parent) = path.parent() {
if !parent.is_dir() {
err!(format!("SQLite database directory `{}` does not exist or is not a directory", parent.display()));
}
}
}
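
Config validation now rejects an SQLite URL whose parent directory does not exist, instead of silently creating it later (the directory-creation code is dropped from `run_migrations` further down). A hedged sketch of the same check, omitting the `DbConnType` test and using hypothetical paths:

use std::path::Path;

/// Return an error string when the parent directory of an SQLite file is missing.
fn check_sqlite_dir(url: &str) -> Result<(), String> {
    if url.contains('/') {
        if let Some(parent) = Path::new(url).parent() {
            if !parent.is_dir() {
                return Err(format!(
                    "SQLite database directory `{}` does not exist or is not a directory",
                    parent.display()
                ));
            }
        }
    }
    Ok(())
}

fn main() {
    // Hypothetical paths, for illustration only.
    println!("{:?}", check_sqlite_dir("db.sqlite3"));             // Ok(()) - no directory part
    println!("{:?}", check_sqlite_dir("no-such-dir/db.sqlite3")); // Err(..) unless the dir exists
}
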
let limit = 256;
if cfg.database_max_conns < 1 || cfg.database_max_conns > limit {
@@ -722,6 +762,39 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
_ => err!("Only HTTP 301/302 and 307/308 redirects are supported"),
}
if cfg.invitation_expiration_hours < 1 {
err!("`INVITATION_EXPIRATION_HOURS` has a minimum duration of 1 hour")
}
// Validate schedule crontab format
if !cfg.send_purge_schedule.is_empty() && cfg.send_purge_schedule.parse::<Schedule>().is_err() {
err!("`SEND_PURGE_SCHEDULE` is not a valid cron expression")
}
if !cfg.trash_purge_schedule.is_empty() && cfg.trash_purge_schedule.parse::<Schedule>().is_err() {
err!("`TRASH_PURGE_SCHEDULE` is not a valid cron expression")
}
if !cfg.incomplete_2fa_schedule.is_empty() && cfg.incomplete_2fa_schedule.parse::<Schedule>().is_err() {
err!("`INCOMPLETE_2FA_SCHEDULE` is not a valid cron expression")
}
if !cfg.emergency_notification_reminder_schedule.is_empty()
&& cfg.emergency_notification_reminder_schedule.parse::<Schedule>().is_err()
{
err!("`EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE` is not a valid cron expression")
}
if !cfg.emergency_request_timeout_schedule.is_empty()
&& cfg.emergency_request_timeout_schedule.parse::<Schedule>().is_err()
{
err!("`EMERGENCY_REQUEST_TIMEOUT_SCHEDULE` is not a valid cron expression")
}
if !cfg.event_cleanup_schedule.is_empty() && cfg.event_cleanup_schedule.parse::<Schedule>().is_err() {
err!("`EVENT_CLEANUP_SCHEDULE` is not a valid cron expression")
}
Ok(())
}
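
Each schedule option is now validated by parsing it with `job_scheduler_ng`'s `Schedule` type, so an invalid cron expression is caught at startup rather than when the scheduler first ticks. A minimal sketch of that parse-based validation, assuming the `job_scheduler_ng` crate the diff imports; `validate_schedule` is a hypothetical helper:

use job_scheduler_ng::Schedule;

/// Accept an empty schedule (job disabled) or a parsable 6-field cron expression.
fn validate_schedule(name: &str, value: &str) -> Result<(), String> {
    if !value.is_empty() && value.parse::<Schedule>().is_err() {
        return Err(format!("`{name}` is not a valid cron expression"));
    }
    Ok(())
}

fn main() {
    // The default incomplete-2FA schedule from the config above.
    assert!(validate_schedule("INCOMPLETE_2FA_SCHEDULE", "30 * * * * *").is_ok());
    assert!(validate_schedule("INCOMPLETE_2FA_SCHEDULE", "not a cron line").is_err());
    assert!(validate_schedule("INCOMPLETE_2FA_SCHEDULE", "").is_ok()); // blank disables the job
}
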
@@ -748,6 +821,42 @@ fn extract_url_path(url: &str) -> String {
}
}
fn generate_smtp_img_src(embed_images: bool, domain: &str) -> String {
if embed_images {
"cid:".to_string()
} else {
format!("{domain}/vw_static/")
}
}
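
`generate_smtp_img_src` decides how images are referenced in outgoing mail: with embedding enabled they are addressed as `cid:` attachments, otherwise they point back at the server's `/vw_static/` route. A small worked example of the two prefixes, with a hypothetical domain:

fn generate_smtp_img_src(embed_images: bool, domain: &str) -> String {
    if embed_images {
        "cid:".to_string()
    } else {
        format!("{domain}/vw_static/")
    }
}

fn main() {
    // Hypothetical domain, for illustration only.
    assert_eq!(generate_smtp_img_src(true, "https://vault.example.com"), "cid:");
    assert_eq!(
        generate_smtp_img_src(false, "https://vault.example.com"),
        "https://vault.example.com/vw_static/"
    );
    // A mail template would then render something like `<img src="{{img_src}}mail-github.png">`.
}
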
/// Generate the correct URL for the icon service.
/// This will be used within icons.rs to call the external icon service.
fn generate_icon_service_url(icon_service: &str) -> String {
match icon_service {
"internal" => String::new(),
"bitwarden" => "https://icons.bitwarden.net/{}/icon.png".to_string(),
"duckduckgo" => "https://icons.duckduckgo.com/ip3/{}.ico".to_string(),
"google" => "https://www.google.com/s2/favicons?domain={}&sz=32".to_string(),
_ => icon_service.to_string(),
}
}
/// Generate the CSP string needed to allow redirected icon fetching
fn generate_icon_service_csp(icon_service: &str, icon_service_url: &str) -> String {
// We split on the first '{', since that is the variable delimiter for an icon service URL.
// Everything up until the first '{' should be fixed and can be used as a CSP string.
let csp_string = match icon_service_url.split_once('{') {
Some((c, _)) => c.to_string(),
None => String::new(),
};
// Because Google does a second redirect to their gstatic.com domain, we need to add an extra CSP string.
match icon_service {
"google" => csp_string + " https://*.gstatic.com/favicon",
_ => csp_string,
}
}
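
The CSP helper takes everything before the first `{` in the icon-service URL as the allowed source and appends an extra gstatic.com entry for Google because of its second redirect. A worked example of the derived values for two of the built-in services from this diff:

fn generate_icon_service_csp(icon_service: &str, icon_service_url: &str) -> String {
    // Everything before the first '{' is the fixed part of the URL and is CSP-safe.
    let csp_string = match icon_service_url.split_once('{') {
        Some((c, _)) => c.to_string(),
        None => String::new(),
    };
    match icon_service {
        "google" => csp_string + " https://*.gstatic.com/favicon",
        _ => csp_string,
    }
}

fn main() {
    assert_eq!(
        generate_icon_service_csp("bitwarden", "https://icons.bitwarden.net/{}/icon.png"),
        "https://icons.bitwarden.net/"
    );
    assert_eq!(
        generate_icon_service_csp("google", "https://www.google.com/s2/favicons?domain={}&sz=32"),
        "https://www.google.com/s2/favicons?domain= https://*.gstatic.com/favicon"
    );
}
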
/// Convert the old SMTP_SSL and SMTP_EXPLICIT_TLS options
fn smtp_convert_deprecated_ssl_options(smtp_ssl: Option<bool>, smtp_explicit_tls: Option<bool>) -> String {
if smtp_explicit_tls.is_some() || smtp_ssl.is_some() {
@@ -909,8 +1018,7 @@ impl Config {
if let Some(akey) = self._duo_akey() {
akey
} else {
let akey = crate::crypto::get_random_64();
let akey_s = data_encoding::BASE64.encode(&akey);
let akey_s = crate::crypto::encode_random_bytes::<64>(data_encoding::BASE64);
// Save the new value
let builder = ConfigBuilder {
@@ -1027,6 +1135,8 @@ where
reg!("admin/organizations");
reg!("admin/diagnostics");
reg!("404");
// And then load user templates to overwrite the defaults
// Use .hbs extension for the files
// Templates get registered with their relative name
@@ -1046,7 +1156,7 @@ fn case_helper<'reg, 'rc>(
let value = param.value().clone();
if h.params().iter().skip(1).any(|x| x.value() == &value) {
h.template().map(|t| t.render(r, ctx, rc, out)).unwrap_or(Ok(()))
h.template().map(|t| t.render(r, ctx, rc, out)).unwrap_or_else(|| Ok(()))
} else {
Ok(())
}

View File

@@ -3,7 +3,7 @@
//
use std::num::NonZeroU32;
use data_encoding::HEXLOWER;
use data_encoding::{Encoding, HEXLOWER};
use ring::{digest, hmac, pbkdf2};
static DIGEST_ALG: pbkdf2::Algorithm = pbkdf2::PBKDF2_HMAC_SHA256;
@@ -37,18 +37,21 @@ pub fn hmac_sign(key: &str, data: &str) -> String {
// Random values
//
pub fn get_random_64() -> Vec<u8> {
get_random(vec![0u8; 64])
}
pub fn get_random(mut array: Vec<u8>) -> Vec<u8> {
/// Return an array holding `N` random bytes.
pub fn get_random_bytes<const N: usize>() -> [u8; N] {
use ring::rand::{SecureRandom, SystemRandom};
let mut array = [0; N];
SystemRandom::new().fill(&mut array).expect("Error generating random values");
array
}
/// Encode random bytes using the provided function.
pub fn encode_random_bytes<const N: usize>(e: Encoding) -> String {
e.encode(&get_random_bytes::<N>())
}
/// Generates a random string over a specified alphabet.
pub fn get_random_string(alphabet: &[u8], num_chars: usize) -> String {
// Ref: https://rust-lang-nursery.github.io/rust-cookbook/algorithms/randomness.html
@@ -77,18 +80,18 @@ pub fn get_random_string_alphanum(num_chars: usize) -> String {
get_random_string(ALPHABET, num_chars)
}
pub fn generate_id(num_bytes: usize) -> String {
HEXLOWER.encode(&get_random(vec![0; num_bytes]))
pub fn generate_id<const N: usize>() -> String {
encode_random_bytes::<N>(HEXLOWER)
}
pub fn generate_send_id() -> String {
// Send IDs are globally scoped, so make them longer to avoid collisions.
generate_id(32) // 256 bits
generate_id::<32>() // 256 bits
}
pub fn generate_attachment_id() -> String {
// Attachment IDs are scoped to a cipher, so they can be smaller.
generate_id(10) // 80 bits
generate_id::<10>() // 80 bits
}
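
The random-ID helpers now take the byte count as a const generic, so the buffer is a fixed-size stack array instead of a heap `Vec`. A minimal sketch of the same pattern with a usage example, assuming the `ring` and `data_encoding` crates used in this diff:

use data_encoding::HEXLOWER;
use ring::rand::{SecureRandom, SystemRandom};

/// Fill a fixed-size stack array with random bytes.
fn get_random_bytes<const N: usize>() -> [u8; N] {
    let mut array = [0u8; N];
    SystemRandom::new().fill(&mut array).expect("Error generating random values");
    array
}

/// Hex-encode N random bytes, e.g. a 32-byte (256-bit) send id.
fn generate_id<const N: usize>() -> String {
    HEXLOWER.encode(&get_random_bytes::<N>())
}

fn main() {
    let send_id = generate_id::<32>();       // 64 hex chars, 256 bits of entropy
    let attachment_id = generate_id::<10>(); // 20 hex chars, 80 bits
    println!("{send_id} {attachment_id}");
}
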
/// Generates a numeric token for email-based verifications.

View File

@@ -125,7 +125,6 @@ macro_rules! generate_connections {
impl DbPool {
// For the given database URL, guess its type, run migrations, create pool, and return it
#[allow(clippy::diverging_sub_expression)]
pub fn from_config() -> Result<Self, Error> {
let url = CONFIG.database_url();
let conn_type = DbConnType::from_url(&url)?;
@@ -182,12 +181,20 @@ macro_rules! generate_connections {
};
}
#[cfg(not(query_logger))]
generate_connections! {
sqlite: diesel::sqlite::SqliteConnection,
mysql: diesel::mysql::MysqlConnection,
postgresql: diesel::pg::PgConnection
}
#[cfg(query_logger)]
generate_connections! {
sqlite: diesel_logger::LoggingConnection<diesel::sqlite::SqliteConnection>,
mysql: diesel_logger::LoggingConnection<diesel::mysql::MysqlConnection>,
postgresql: diesel_logger::LoggingConnection<diesel::pg::PgConnection>
}
impl DbConnType {
pub fn from_url(url: &str) -> Result<DbConnType, Error> {
// Mysql
@@ -228,8 +235,8 @@ impl DbConnType {
pub fn default_init_stmts(&self) -> String {
match self {
Self::sqlite => "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;".to_string(),
Self::mysql => "".to_string(),
Self::postgresql => "".to_string(),
Self::mysql => String::new(),
Self::postgresql => String::new(),
}
}
}
@@ -365,7 +372,7 @@ pub mod models;
/// Creates a back-up of the sqlite database
/// MySQL/MariaDB and PostgreSQL are not supported.
pub async fn backup_database(conn: &DbConn) -> Result<(), Error> {
pub async fn backup_database(conn: &mut DbConn) -> Result<(), Error> {
db_run! {@raw conn:
postgresql, mysql {
let _ = conn;
@@ -383,15 +390,19 @@ pub async fn backup_database(conn: &DbConn) -> Result<(), Error> {
}
/// Get the SQL Server version
pub async fn get_sql_server_version(conn: &DbConn) -> String {
pub async fn get_sql_server_version(conn: &mut DbConn) -> String {
db_run! {@raw conn:
postgresql, mysql {
no_arg_sql_function!(version, diesel::sql_types::Text);
diesel::select(version).get_result::<String>(conn).unwrap_or_else(|_| "Unknown".to_string())
sql_function!{
fn version() -> diesel::sql_types::Text;
}
diesel::select(version()).get_result::<String>(conn).unwrap_or_else(|_| "Unknown".to_string())
}
sqlite {
no_arg_sql_function!(sqlite_version, diesel::sql_types::Text);
diesel::select(sqlite_version).get_result::<String>(conn).unwrap_or_else(|_| "Unknown".to_string())
sql_function!{
fn sqlite_version() -> diesel::sql_types::Text;
}
diesel::select(sqlite_version()).get_result::<String>(conn).unwrap_or_else(|_| "Unknown".to_string())
}
}
}
@@ -416,68 +427,64 @@ impl<'r> FromRequest<'r> for DbConn {
// https://docs.rs/diesel_migrations/*/diesel_migrations/macro.embed_migrations.html
#[cfg(sqlite)]
mod sqlite_migrations {
embed_migrations!("migrations/sqlite");
use diesel_migrations::{EmbeddedMigrations, MigrationHarness};
pub const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations/sqlite");
pub fn run_migrations() -> Result<(), super::Error> {
// Make sure the directory exists
let url = crate::CONFIG.database_url();
let path = std::path::Path::new(&url);
if let Some(parent) = path.parent() {
if std::fs::create_dir_all(parent).is_err() {
error!("Error creating database directory");
std::process::exit(1);
}
}
use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection = diesel::sqlite::SqliteConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration
let url = crate::CONFIG.database_url();
// Establish a connection to the sqlite database (this will create a new one, if it does
// not exist, and exit if there is an error).
let mut connection = diesel::sqlite::SqliteConnection::establish(&url)?;
// Run the migrations after successfully establishing a connection
// Disable Foreign Key Checks during migration
// Scoped to a connection.
diesel::sql_query("PRAGMA foreign_keys = OFF")
.execute(&connection)
.execute(&mut connection)
.expect("Failed to disable Foreign Key Checks during migrations");
// Turn on WAL in SQLite
if crate::CONFIG.enable_db_wal() {
diesel::sql_query("PRAGMA journal_mode=wal").execute(&connection).expect("Failed to turn on WAL");
diesel::sql_query("PRAGMA journal_mode=wal").execute(&mut connection).expect("Failed to turn on WAL");
}
embedded_migrations::run_with_output(&connection, &mut std::io::stdout())?;
connection.run_pending_migrations(MIGRATIONS).expect("Error running migrations");
Ok(())
}
}
#[cfg(mysql)]
mod mysql_migrations {
embed_migrations!("migrations/mysql");
use diesel_migrations::{EmbeddedMigrations, MigrationHarness};
pub const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations/mysql");
pub fn run_migrations() -> Result<(), super::Error> {
use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection = diesel::mysql::MysqlConnection::establish(&crate::CONFIG.database_url())?;
let mut connection = diesel::mysql::MysqlConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration
// Scoped to a connection/session.
diesel::sql_query("SET FOREIGN_KEY_CHECKS = 0")
.execute(&connection)
.execute(&mut connection)
.expect("Failed to disable Foreign Key Checks during migrations");
embedded_migrations::run_with_output(&connection, &mut std::io::stdout())?;
connection.run_pending_migrations(MIGRATIONS).expect("Error running migrations");
Ok(())
}
}
#[cfg(postgresql)]
mod postgresql_migrations {
embed_migrations!("migrations/postgresql");
use diesel_migrations::{EmbeddedMigrations, MigrationHarness};
pub const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations/postgresql");
pub fn run_migrations() -> Result<(), super::Error> {
use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection = diesel::pg::PgConnection::establish(&crate::CONFIG.database_url())?;
let mut connection = diesel::pg::PgConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration
// FIXME: Per https://www.postgresql.org/docs/12/sql-set-constraints.html,
@@ -487,10 +494,10 @@ mod postgresql_migrations {
// Migrations that need to disable foreign key checks should run this
// from within the migration script itself.
diesel::sql_query("SET CONSTRAINTS ALL DEFERRED")
.execute(&connection)
.execute(&mut connection)
.expect("Failed to disable Foreign Key Checks during migrations");
embedded_migrations::run_with_output(&connection, &mut std::io::stdout())?;
connection.run_pending_migrations(MIGRATIONS).expect("Error running migrations");
Ok(())
}
}
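The db/mod.rs hunks above track the diesel 2.x API: connections are taken by &mut, no_arg_sql_function! becomes sql_function!, and the embed_migrations! / embedded_migrations::run_with_output pair is replaced by EmbeddedMigrations plus MigrationHarness::run_pending_migrations. A runnable sketch of that migration pattern, assuming a migrations/sqlite directory in the crate root; the function name and error handling are illustrative:

use diesel::{Connection, RunQueryDsl};
use diesel_migrations::{embed_migrations, EmbeddedMigrations, MigrationHarness};

const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations/sqlite");

fn run_migrations(database_url: &str) -> Result<(), Box<dyn std::error::Error>> {
    // diesel 2.x mutates connections in place, so the handle must be `mut`.
    let mut connection = diesel::sqlite::SqliteConnection::establish(database_url)?;

    // Foreign key checks are scoped to this connection, so disabling them
    // here only affects the migration run.
    diesel::sql_query("PRAGMA foreign_keys = OFF").execute(&mut connection)?;

    // Replaces the old embedded_migrations::run_with_output call.
    connection
        .run_pending_migrations(MIGRATIONS)
        .map_err(|e| format!("Error running migrations: {e}"))?;
    Ok(())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // An in-memory database is enough to exercise the pattern.
    run_migrations(":memory:")
}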

View File

@@ -6,9 +6,9 @@ use crate::CONFIG;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "attachments"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(id)]
#[diesel(table_name = attachments)]
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(id))]
pub struct Attachment {
pub id: String,
pub cipher_uuid: String,
@@ -58,7 +58,7 @@ use crate::error::MapResult;
/// Database methods
impl Attachment {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(attachments::table)
@@ -90,7 +90,7 @@ impl Attachment {
}
}
pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
crate::util::retry(
|| diesel::delete(attachments::table.filter(attachments::id.eq(&self.id))).execute(conn),
@@ -114,14 +114,14 @@ impl Attachment {
}}
}
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for attachment in Attachment::find_by_cipher(cipher_uuid, conn).await {
attachment.delete(conn).await?;
}
Ok(())
}
pub async fn find_by_id(id: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_id(id: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
attachments::table
.filter(attachments::id.eq(id.to_lowercase()))
@@ -131,7 +131,7 @@ impl Attachment {
}}
}
pub async fn find_by_cipher(cipher_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
attachments::table
.filter(attachments::cipher_uuid.eq(cipher_uuid))
@@ -141,7 +141,7 @@ impl Attachment {
}}
}
pub async fn size_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -153,7 +153,7 @@ impl Attachment {
}}
}
pub async fn count_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
pub async fn count_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -164,7 +164,7 @@ impl Attachment {
}}
}
pub async fn size_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
pub async fn size_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -176,7 +176,7 @@ impl Attachment {
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -187,7 +187,7 @@ impl Attachment {
}}
}
pub async fn find_all_by_ciphers(cipher_uuids: &Vec<String>, conn: &DbConn) -> Vec<Self> {
pub async fn find_all_by_ciphers(cipher_uuids: &Vec<String>, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
attachments::table
.filter(attachments::cipher_uuid.eq_any(cipher_uuids))
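The attachment model shows the two mechanical changes that repeat through the rest of this comparison: the bare #[table_name]/#[primary_key]/#[changeset_options] attributes become namespaced #[diesel(...)] attributes, and every query method now takes conn: &mut DbConn because diesel 2.x executes queries against mutable connections. A self-contained sketch of both patterns against an in-memory SQLite database; the table and struct here are illustrative stand-ins, not Vaultwarden's real schema:

use diesel::prelude::*;
use diesel::sqlite::SqliteConnection;

diesel::table! {
    attachments (id) {
        id -> Text,
        cipher_uuid -> Text,
        file_name -> Text,
        file_size -> Integer,
    }
}

// diesel 2.x style: namespaced attributes instead of #[table_name = "..."].
#[derive(Queryable, Insertable, AsChangeset)]
#[diesel(table_name = attachments)]
struct Attachment {
    id: String,
    cipher_uuid: String,
    file_name: String,
    file_size: i32,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connections are borrowed mutably, hence `conn: &mut DbConn` above.
    let mut conn = SqliteConnection::establish(":memory:")?;
    diesel::sql_query(
        "CREATE TABLE attachments (id TEXT PRIMARY KEY, cipher_uuid TEXT NOT NULL, \
         file_name TEXT NOT NULL, file_size INTEGER NOT NULL)",
    )
    .execute(&mut conn)?;

    let attachment = Attachment {
        id: "a1".into(),
        cipher_uuid: "c1".into(),
        file_name: "notes.txt".into(),
        file_size: 42,
    };

    // Same upsert shape as Attachment::save uses for SQLite/MySQL.
    diesel::replace_into(attachments::table).values(&attachment).execute(&mut conn)?;

    // Same aggregation shape as Attachment::size_by_user / size_by_org.
    let size: Option<i64> = attachments::table
        .filter(attachments::cipher_uuid.eq("c1"))
        .select(diesel::dsl::sum(attachments::file_size))
        .first(&mut conn)?;
    assert_eq!(size, Some(42));
    Ok(())
}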

View File

@@ -2,7 +2,9 @@ use crate::CONFIG;
use chrono::{Duration, NaiveDateTime, Utc};
use serde_json::Value;
use super::{Attachment, CollectionCipher, Favorite, FolderCipher, User, UserOrgStatus, UserOrgType, UserOrganization};
use super::{
Attachment, CollectionCipher, Favorite, FolderCipher, Group, User, UserOrgStatus, UserOrgType, UserOrganization,
};
use crate::api::core::CipherSyncData;
@@ -10,9 +12,9 @@ use std::borrow::Cow;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "ciphers"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)]
#[diesel(table_name = ciphers)]
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct Cipher {
pub uuid: String,
pub created_at: NaiveDateTime,
@@ -85,7 +87,7 @@ impl Cipher {
host: &str,
user_uuid: &str,
cipher_sync_data: Option<&CipherSyncData>,
conn: &DbConn,
conn: &mut DbConn,
) -> Value {
use crate::util::format_date;
@@ -146,7 +148,7 @@ impl Cipher {
Cow::from(Vec::with_capacity(0))
}
} else {
Cow::from(self.get_collections(user_uuid, conn).await)
Cow::from(self.get_collections(user_uuid.to_string(), conn).await)
};
// There are three types of cipher response models in upstream
@@ -160,6 +162,7 @@ impl Cipher {
"Object": "cipherDetails",
"Id": self.uuid,
"Type": self.atype,
"CreationDate": format_date(&self.created_at),
"RevisionDate": format_date(&self.updated_at),
"DeletedDate": self.deleted_at.map_or(Value::Null, |d| Value::String(format_date(&d))),
"FolderId": if let Some(cipher_sync_data) = cipher_sync_data { cipher_sync_data.cipher_folders.get(&self.uuid).map(|c| c.to_string() ) } else { self.get_folder_uuid(user_uuid, conn).await },
@@ -207,7 +210,7 @@ impl Cipher {
json_object
}
pub async fn update_users_revision(&self, conn: &DbConn) -> Vec<String> {
pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<String> {
let mut user_uuids = Vec::new();
match self.user_uuid {
Some(ref user_uuid) => {
@@ -227,7 +230,7 @@ impl Cipher {
user_uuids
}
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
self.updated_at = Utc::now().naive_utc();
@@ -262,7 +265,7 @@ impl Cipher {
}
}
pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
FolderCipher::delete_all_by_cipher(&self.uuid, conn).await?;
@@ -277,7 +280,7 @@ impl Cipher {
}}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
// TODO: Optimize this by executing a DELETE directly on the database, instead of first fetching.
for cipher in Self::find_by_org(org_uuid, conn).await {
cipher.delete(conn).await?;
@@ -285,7 +288,7 @@ impl Cipher {
Ok(())
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for cipher in Self::find_owned_by_user(user_uuid, conn).await {
cipher.delete(conn).await?;
}
@@ -293,7 +296,7 @@ impl Cipher {
}
/// Purge all ciphers that are old enough to be auto-deleted.
pub async fn purge_trash(conn: &DbConn) {
pub async fn purge_trash(conn: &mut DbConn) {
if let Some(auto_delete_days) = CONFIG.trash_auto_delete_days() {
let now = Utc::now().naive_utc();
let dt = now - Duration::days(auto_delete_days);
@@ -303,7 +306,7 @@ impl Cipher {
}
}
pub async fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn).await;
match (self.get_folder_uuid(user_uuid, conn).await, folder_uuid) {
@@ -336,11 +339,11 @@ impl Cipher {
}
/// Returns whether this cipher is owned by an org in which the user has full access.
pub async fn is_in_full_access_org(
async fn is_in_full_access_org(
&self,
user_uuid: &str,
cipher_sync_data: Option<&CipherSyncData>,
conn: &DbConn,
conn: &mut DbConn,
) -> bool {
if let Some(ref org_uuid) = self.organization_uuid {
if let Some(cipher_sync_data) = cipher_sync_data {
@@ -354,6 +357,23 @@ impl Cipher {
false
}
/// Returns whether this cipher is owned by an org in which the user has full access via a group.
async fn is_in_full_access_group(
&self,
user_uuid: &str,
cipher_sync_data: Option<&CipherSyncData>,
conn: &mut DbConn,
) -> bool {
if let Some(ref org_uuid) = self.organization_uuid {
if let Some(cipher_sync_data) = cipher_sync_data {
return cipher_sync_data.user_group_full_access_for_organizations.get(org_uuid).is_some();
} else {
return Group::is_in_full_access_group(user_uuid, org_uuid, conn).await;
}
}
false
}
/// Returns the user's access restrictions to this cipher. A return value
/// of None means that this cipher does not belong to the user, and is
/// not in any collection the user has access to. Otherwise, the user has
@@ -363,12 +383,15 @@ impl Cipher {
&self,
user_uuid: &str,
cipher_sync_data: Option<&CipherSyncData>,
conn: &DbConn,
conn: &mut DbConn,
) -> Option<(bool, bool)> {
// Check whether this cipher is directly owned by the user, or is in
// a collection that the user has full access to. If so, there are no
// access restrictions.
if self.is_owned_by_user(user_uuid) || self.is_in_full_access_org(user_uuid, cipher_sync_data, conn).await {
if self.is_owned_by_user(user_uuid)
|| self.is_in_full_access_org(user_uuid, cipher_sync_data, conn).await
|| self.is_in_full_access_group(user_uuid, cipher_sync_data, conn).await
{
return Some((false, false));
}
@@ -376,14 +399,22 @@ impl Cipher {
let mut rows: Vec<(bool, bool)> = Vec::new();
if let Some(collections) = cipher_sync_data.cipher_collections.get(&self.uuid) {
for collection in collections {
// User permissions
if let Some(uc) = cipher_sync_data.user_collections.get(collection) {
rows.push((uc.read_only, uc.hide_passwords));
}
// Group permissions
if let Some(cg) = cipher_sync_data.user_collections_groups.get(collection) {
rows.push((cg.read_only, cg.hide_passwords));
}
}
}
rows
} else {
self.get_collections_access_flags(user_uuid, conn).await
let mut access_flags = self.get_user_collections_access_flags(user_uuid, conn).await;
access_flags.append(&mut self.get_group_collections_access_flags(user_uuid, conn).await);
access_flags
};
if rows.is_empty() {
@@ -410,7 +441,7 @@ impl Cipher {
Some((read_only, hide_passwords))
}
pub async fn get_collections_access_flags(&self, user_uuid: &str, conn: &DbConn) -> Vec<(bool, bool)> {
async fn get_user_collections_access_flags(&self, user_uuid: &str, conn: &mut DbConn) -> Vec<(bool, bool)> {
db_run! {conn: {
// Check whether this cipher is in any collections accessible to the
// user. If so, retrieve the access flags for each collection.
@@ -423,35 +454,58 @@ impl Cipher {
.and(users_collections::user_uuid.eq(user_uuid))))
.select((users_collections::read_only, users_collections::hide_passwords))
.load::<(bool, bool)>(conn)
.expect("Error getting access restrictions")
.expect("Error getting user access restrictions")
}}
}
pub async fn is_write_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
async fn get_group_collections_access_flags(&self, user_uuid: &str, conn: &mut DbConn) -> Vec<(bool, bool)> {
db_run! {conn: {
ciphers::table
.filter(ciphers::uuid.eq(&self.uuid))
.inner_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)
))
.inner_join(collections_groups::table.on(
collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid)
))
.inner_join(groups_users::table.on(
groups_users::groups_uuid.eq(collections_groups::groups_uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::uuid.eq(groups_users::users_organizations_uuid)
))
.filter(users_organizations::user_uuid.eq(user_uuid))
.select((collections_groups::read_only, collections_groups::hide_passwords))
.load::<(bool, bool)>(conn)
.expect("Error getting group access restrictions")
}}
}
pub async fn is_write_accessible_to_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
match self.get_access_restrictions(user_uuid, None, conn).await {
Some((read_only, _hide_passwords)) => !read_only,
None => false,
}
}
pub async fn is_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
pub async fn is_accessible_to_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
self.get_access_restrictions(user_uuid, None, conn).await.is_some()
}
// Returns whether this cipher is a favorite of the specified user.
pub async fn is_favorite(&self, user_uuid: &str, conn: &DbConn) -> bool {
pub async fn is_favorite(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
Favorite::is_favorite(&self.uuid, user_uuid, conn).await
}
// Sets whether this cipher is a favorite of the specified user.
pub async fn set_favorite(&self, favorite: Option<bool>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn set_favorite(&self, favorite: Option<bool>, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
match favorite {
None => Ok(()), // No change requested.
Some(status) => Favorite::set_favorite(status, &self.uuid, user_uuid, conn).await,
}
}
pub async fn get_folder_uuid(&self, user_uuid: &str, conn: &DbConn) -> Option<String> {
pub async fn get_folder_uuid(&self, user_uuid: &str, conn: &mut DbConn) -> Option<String> {
db_run! {conn: {
folders_ciphers::table
.inner_join(folders::table)
@@ -463,7 +517,7 @@ impl Cipher {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::uuid.eq(uuid))
@@ -476,16 +530,16 @@ impl Cipher {
// Find all ciphers accessible or visible to the specified user.
//
// "Accessible" means the user has read access to the cipher, either via
// direct ownership or via collection access.
// direct ownership, collection access, or group access.
//
// "Visible" usually means the same as accessible, except when an org
// owner/admin sets their account to have access to only selected
// owner/admin sets their account or group to have access to only selected
// collections in the org (presumably because they aren't interested in
// the other collections in the org). In this case, if `visible_only` is
// true, then the non-interesting ciphers will not be returned. As a
// result, those ciphers will not appear in "My Vault" for the org
// owner/admin, but they can still be accessed via the org vault view.
pub async fn find_by_user(user_uuid: &str, visible_only: bool, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, visible_only: bool, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
let mut query = ciphers::table
.left_join(ciphers_collections::table.on(
@@ -501,9 +555,22 @@ impl Cipher {
// Ensure that users_collections::user_uuid is NULL for unconfirmed users.
.and(users_organizations::user_uuid.eq(users_collections::user_uuid))
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid).and(
collections_groups::groups_uuid.eq(groups::uuid)
)
))
.filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
.or_filter(users_organizations::access_all.eq(true)) // access_all in org
.or_filter(users_collections::user_uuid.eq(user_uuid)) // Access to collection
.or_filter(groups::access_all.eq(true)) // Access via groups
.or_filter(collections_groups::collections_uuid.is_not_null()) // Access via groups
.into_boxed();
if !visible_only {
@@ -520,12 +587,12 @@ impl Cipher {
}
// Find all ciphers visible to the specified user.
pub async fn find_by_user_visible(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user_visible(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
Self::find_by_user(user_uuid, true, conn).await
}
// Find all ciphers directly owned by the specified user.
pub async fn find_owned_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_owned_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(
@@ -536,7 +603,7 @@ impl Cipher {
}}
}
pub async fn count_owned_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
pub async fn count_owned_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! {conn: {
ciphers::table
.filter(ciphers::user_uuid.eq(user_uuid))
@@ -547,7 +614,7 @@ impl Cipher {
}}
}
pub async fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
@@ -555,7 +622,7 @@ impl Cipher {
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! {conn: {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
@@ -566,7 +633,7 @@ impl Cipher {
}}
}
pub async fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_folder(folder_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
folders_ciphers::table.inner_join(ciphers::table)
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -576,7 +643,7 @@ impl Cipher {
}
/// Find all ciphers that were deleted before the specified datetime.
pub async fn find_deleted_before(dt: &NaiveDateTime, conn: &DbConn) -> Vec<Self> {
pub async fn find_deleted_before(dt: &NaiveDateTime, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::deleted_at.lt(dt))
@@ -584,7 +651,7 @@ impl Cipher {
}}
}
pub async fn get_collections(&self, user_id: &str, conn: &DbConn) -> Vec<String> {
pub async fn get_collections(&self, user_id: String, conn: &mut DbConn) -> Vec<String> {
db_run! {conn: {
ciphers_collections::table
.inner_join(collections::table.on(
@@ -592,12 +659,12 @@ impl Cipher {
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid).and(
users_organizations::user_uuid.eq(user_id)
users_organizations::user_uuid.eq(user_id.clone())
)
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and(
users_collections::user_uuid.eq(user_id)
users_collections::user_uuid.eq(user_id.clone())
)
))
.filter(ciphers_collections::cipher_uuid.eq(&self.uuid))
@@ -613,7 +680,7 @@ impl Cipher {
/// Return a Vec with (cipher_uuid, collection_uuid)
/// This is used during a full sync so we only need one query for all collections accessible.
pub async fn get_collections_with_cipher_by_user(user_id: &str, conn: &DbConn) -> Vec<(String, String)> {
pub async fn get_collections_with_cipher_by_user(user_id: String, conn: &mut DbConn) -> Vec<(String, String)> {
db_run! {conn: {
ciphers_collections::table
.inner_join(collections::table.on(
@@ -621,19 +688,30 @@ impl Cipher {
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid).and(
users_organizations::user_uuid.eq(user_id)
users_organizations::user_uuid.eq(user_id.clone())
)
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and(
users_collections::user_uuid.eq(user_id)
users_collections::user_uuid.eq(user_id.clone())
)
))
.filter(users_collections::user_uuid.eq(user_id).or( // User has access to collection
users_organizations::access_all.eq(true).or( // User has access all
users_organizations::atype.le(UserOrgType::Admin as i32) // User is admin or owner
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid).and(
collections_groups::groups_uuid.eq(groups::uuid)
)
))
.or_filter(users_collections::user_uuid.eq(user_id)) // User has access to collection
.or_filter(users_organizations::access_all.eq(true)) // User has access all
.or_filter(users_organizations::atype.le(UserOrgType::Admin as i32)) // User is admin or owner
.or_filter(groups::access_all.eq(true)) // Access via group
.or_filter(collections_groups::collections_uuid.is_not_null()) // Access via group
.select(ciphers_collections::all_columns)
.load::<(String, String)>(conn).unwrap_or_default()
}}
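The cipher changes layer group grants on top of the existing user grants: access rows can now come from users_collections or from collections_groups, and both feed the same (read_only, hide_passwords) restriction logic. A pure-Rust sketch of how such rows can be folded into one pair, assuming the upstream rule that a restriction only applies when every collection granting access imposes it; the fold itself is illustrative and not copied from this diff:

fn fold_access_rows(rows: &[(bool, bool)]) -> Option<(bool, bool)> {
    if rows.is_empty() {
        // Neither a user grant nor a group grant: the cipher is not accessible.
        return None;
    }
    // A flag only sticks if every granting collection sets it, so the most
    // permissive grant wins.
    let read_only = rows.iter().all(|&(ro, _)| ro);
    let hide_passwords = rows.iter().all(|&(_, hp)| hp);
    Some((read_only, hide_passwords))
}

fn main() {
    // Read-only via a direct collection grant, read/write via a group grant:
    // the cipher ends up writable.
    assert_eq!(fold_access_rows(&[(true, false), (false, false)]), Some((false, false)));
    // No grants at all.
    assert_eq!(fold_access_rows(&[]), None);
}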

View File

@@ -1,11 +1,11 @@
use serde_json::Value;
use super::{User, UserOrgStatus, UserOrgType, UserOrganization};
use super::{CollectionGroup, User, UserOrgStatus, UserOrgType, UserOrganization};
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "collections"]
#[primary_key(uuid)]
#[diesel(table_name = collections)]
#[diesel(primary_key(uuid))]
pub struct Collection {
pub uuid: String,
pub org_uuid: String,
@@ -13,8 +13,8 @@ db_object! {
}
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "users_collections"]
#[primary_key(user_uuid, collection_uuid)]
#[diesel(table_name = users_collections)]
#[diesel(primary_key(user_uuid, collection_uuid))]
pub struct CollectionUser {
pub user_uuid: String,
pub collection_uuid: String,
@@ -23,8 +23,8 @@ db_object! {
}
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "ciphers_collections"]
#[primary_key(cipher_uuid, collection_uuid)]
#[diesel(table_name = ciphers_collections)]
#[diesel(primary_key(cipher_uuid, collection_uuid))]
pub struct CollectionCipher {
pub cipher_uuid: String,
pub collection_uuid: String,
@@ -56,7 +56,7 @@ impl Collection {
&self,
user_uuid: &str,
cipher_sync_data: Option<&crate::api::core::CipherSyncData>,
conn: &DbConn,
conn: &mut DbConn,
) -> Value {
let (read_only, hide_passwords) = if let Some(cipher_sync_data) = cipher_sync_data {
match cipher_sync_data.user_organizations.get(&self.org_uuid) {
@@ -89,7 +89,7 @@ use crate::error::MapResult;
/// Database methods
impl Collection {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
db_run! { conn:
@@ -123,10 +123,11 @@ impl Collection {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
CollectionCipher::delete_all_by_collection(&self.uuid, conn).await?;
CollectionUser::delete_all_by_collection(&self.uuid, conn).await?;
CollectionGroup::delete_all_by_collection(&self.uuid, conn).await?;
db_run! { conn: {
diesel::delete(collections::table.filter(collections::uuid.eq(self.uuid)))
@@ -135,20 +136,20 @@ impl Collection {
}}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for collection in Self::find_by_organization(org_uuid, conn).await {
collection.delete(conn).await?;
}
Ok(())
}
pub async fn update_users_revision(&self, conn: &DbConn) {
pub async fn update_users_revision(&self, conn: &mut DbConn) {
for user_org in UserOrganization::find_by_collection_and_org(&self.uuid, &self.org_uuid, conn).await.iter() {
User::update_uuid_revision(&user_org.user_uuid, conn).await;
}
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
@@ -158,17 +159,28 @@ impl Collection {
}}
}
pub async fn find_by_user_uuid(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user_uuid(user_uuid: String, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid)
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
users_organizations::user_uuid.eq(user_uuid.clone())
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(
@@ -177,17 +189,40 @@ impl Collection {
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true) // access_all in Organization
).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null()
)
)
).select(collections::all_columns)
)
.select(collections::all_columns)
.distinct()
.load::<CollectionDb>(conn).expect("Error loading collections").from_db()
}}
}
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
Self::find_by_user_uuid(user_uuid, conn).await.into_iter().filter(|c| c.org_uuid == org_uuid).collect()
// Check if a user has access to a specific collection
// FIXME: This needs to be reviewed. The query used by `find_by_user_uuid` could be adjusted to filter when needed.
// For now this is a good solution without making too many changes.
pub async fn has_access_by_collection_and_user_uuid(
collection_uuid: &str,
user_uuid: &str,
conn: &mut DbConn,
) -> bool {
Self::find_by_user_uuid(user_uuid.to_owned(), conn).await.into_iter().any(|c| c.uuid == collection_uuid)
}
pub async fn find_by_organization(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
Self::find_by_user_uuid(user_uuid.to_owned(), conn)
.await
.into_iter()
.filter(|c| c.org_uuid == org_uuid)
.collect()
}
pub async fn find_by_organization(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections::table
.filter(collections::org_uuid.eq(org_uuid))
@@ -197,7 +232,7 @@ impl Collection {
}}
}
pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
@@ -209,12 +244,12 @@ impl Collection {
}}
}
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: String, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid)
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
@@ -222,11 +257,27 @@ impl Collection {
users_organizations::user_uuid.eq(user_uuid)
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(collections::uuid.eq(uuid))
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null()
)
)
).select(collections::all_columns)
@@ -235,7 +286,7 @@ impl Collection {
}}
}
pub async fn is_writable_by_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
pub async fn is_writable_by_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
None => false, // Not in Org
Some(user_org) => {
@@ -257,7 +308,7 @@ impl Collection {
}
}
pub async fn hide_passwords_for_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
pub async fn hide_passwords_for_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
None => true, // Not in Org
Some(user_org) => {
@@ -282,7 +333,7 @@ impl Collection {
/// Database methods
impl CollectionUser {
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
@@ -300,7 +351,7 @@ impl CollectionUser {
collection_uuid: &str,
read_only: bool,
hide_passwords: bool,
conn: &DbConn,
conn: &mut DbConn,
) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn).await;
@@ -353,7 +404,7 @@ impl CollectionUser {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
db_run! { conn: {
@@ -367,7 +418,7 @@ impl CollectionUser {
}}
}
pub async fn find_by_collection(collection_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_collection(collection_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
@@ -378,7 +429,11 @@ impl CollectionUser {
}}
}
pub async fn find_by_collection_and_user(collection_uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_collection_and_user(
collection_uuid: &str,
user_uuid: &str,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
@@ -390,7 +445,7 @@ impl CollectionUser {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
@@ -401,7 +456,7 @@ impl CollectionUser {
}}
}
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for collection in CollectionUser::find_by_collection(collection_uuid, conn).await.iter() {
User::update_uuid_revision(&collection.user_uuid, conn).await;
}
@@ -413,7 +468,7 @@ impl CollectionUser {
}}
}
pub async fn delete_all_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
let collectionusers = Self::find_by_organization_and_user_uuid(org_uuid, user_uuid, conn).await;
db_run! { conn: {
@@ -432,7 +487,7 @@ impl CollectionUser {
/// Database methods
impl CollectionCipher {
pub async fn save(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn save(cipher_uuid: &str, collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn).await;
db_run! { conn:
@@ -462,7 +517,7 @@ impl CollectionCipher {
}
}
pub async fn delete(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete(cipher_uuid: &str, collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn).await;
db_run! { conn: {
@@ -476,7 +531,7 @@ impl CollectionCipher {
}}
}
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -484,7 +539,7 @@ impl CollectionCipher {
}}
}
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::collection_uuid.eq(collection_uuid)))
.execute(conn)
@@ -492,7 +547,7 @@ impl CollectionCipher {
}}
}
pub async fn update_users_revision(collection_uuid: &str, conn: &DbConn) {
pub async fn update_users_revision(collection_uuid: &str, conn: &mut DbConn) {
if let Some(collection) = Collection::find_by_uuid(collection_uuid, conn).await {
collection.update_users_revision(conn).await;
}

View File

@@ -4,9 +4,9 @@ use crate::CONFIG;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "devices"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid, user_uuid)]
#[diesel(table_name = devices)]
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid, user_uuid))]
pub struct Device {
pub uuid: String,
pub created_at: NaiveDateTime,
@@ -48,7 +48,7 @@ impl Device {
use crate::crypto;
use data_encoding::BASE64;
let twofactor_remember = BASE64.encode(&crypto::get_random(vec![0u8; 180]));
let twofactor_remember = crypto::encode_random_bytes::<180>(BASE64);
self.twofactor_remember = Some(twofactor_remember.clone());
twofactor_remember
@@ -69,7 +69,7 @@ impl Device {
use crate::crypto;
use data_encoding::BASE64URL;
self.refresh_token = BASE64URL.encode(&crypto::get_random_64());
self.refresh_token = crypto::encode_random_bytes::<64>(BASE64URL);
}
// Update the expiration of the device and the last update date
@@ -116,7 +116,7 @@ use crate::error::MapResult;
/// Database methods
impl Device {
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
self.updated_at = Utc::now().naive_utc();
db_run! { conn:
@@ -136,7 +136,7 @@ impl Device {
}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(devices::table.filter(devices::user_uuid.eq(user_uuid)))
.execute(conn)
@@ -144,7 +144,7 @@ impl Device {
}}
}
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::uuid.eq(uuid))
@@ -155,7 +155,7 @@ impl Device {
}}
}
pub async fn find_by_refresh_token(refresh_token: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_refresh_token(refresh_token: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::refresh_token.eq(refresh_token))
@@ -165,7 +165,7 @@ impl Device {
}}
}
pub async fn find_latest_active_by_user(user_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_latest_active_by_user(user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
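Both device tokens now go through the new const-generic helper instead of get_random(vec![...]). A small sketch of the two encodings involved, assuming the data-encoding crate; the assertions just restate the base64 length arithmetic:

use data_encoding::{BASE64, BASE64URL};

fn main() {
    // 180 random bytes -> BASE64 for the two-factor remember token,
    // 64 random bytes -> BASE64URL for the refresh token.
    // Zeroed buffers stand in for random data here.
    let remember = BASE64.encode(&[0u8; 180]);
    let refresh = BASE64URL.encode(&[0u8; 64]);

    // ceil(180 / 3) * 4 = 240 characters, ceil(64 / 3) * 4 = 88 characters (padded).
    assert_eq!(remember.len(), 240);
    assert_eq!(refresh.len(), 88);
}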

View File

@@ -1,13 +1,15 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use crate::{api::EmptyResult, db::DbConn, error::MapResult};
use super::User;
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "emergency_access"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = emergency_access)]
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct EmergencyAccess {
pub uuid: String,
pub grantor_uuid: String,
@@ -27,14 +29,14 @@ db_object! {
/// Local methods
impl EmergencyAccess {
pub fn new(grantor_uuid: String, email: Option<String>, status: i32, atype: i32, wait_time_days: i32) -> Self {
pub fn new(grantor_uuid: String, email: String, status: i32, atype: i32, wait_time_days: i32) -> Self {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
grantor_uuid,
grantee_uuid: None,
email,
email: Some(email),
status,
atype,
wait_time_days,
@@ -54,14 +56,6 @@ impl EmergencyAccess {
}
}
pub fn has_type(&self, access_type: EmergencyAccessType) -> bool {
self.atype == access_type as i32
}
pub fn has_status(&self, status: EmergencyAccessStatus) -> bool {
self.status == status as i32
}
pub fn to_json(&self) -> Value {
json!({
"Id": self.uuid,
@@ -72,7 +66,7 @@ impl EmergencyAccess {
})
}
pub async fn to_json_grantor_details(&self, conn: &DbConn) -> Value {
pub async fn to_json_grantor_details(&self, conn: &mut DbConn) -> Value {
let grantor_user = User::find_by_uuid(&self.grantor_uuid, conn).await.expect("Grantor user not found.");
json!({
@@ -87,8 +81,7 @@ impl EmergencyAccess {
})
}
#[allow(clippy::manual_map)]
pub async fn to_json_grantee_details(&self, conn: &DbConn) -> Value {
pub async fn to_json_grantee_details(&self, conn: &mut DbConn) -> Value {
let grantee_user = if let Some(grantee_uuid) = self.grantee_uuid.as_deref() {
Some(User::find_by_uuid(grantee_uuid, conn).await.expect("Grantee user not found."))
} else if let Some(email) = self.email.as_deref() {
@@ -110,7 +103,7 @@ impl EmergencyAccess {
}
}
#[derive(Copy, Clone, PartialEq, Eq, num_derive::FromPrimitive)]
#[derive(Copy, Clone)]
pub enum EmergencyAccessType {
View = 0,
Takeover = 1,
@@ -126,18 +119,6 @@ impl EmergencyAccessType {
}
}
impl PartialEq<i32> for EmergencyAccessType {
fn eq(&self, other: &i32) -> bool {
*other == *self as i32
}
}
impl PartialEq<EmergencyAccessType> for i32 {
fn eq(&self, other: &EmergencyAccessType) -> bool {
*self == *other as i32
}
}
pub enum EmergencyAccessStatus {
Invited = 0,
Accepted = 1,
@@ -148,13 +129,8 @@ pub enum EmergencyAccessStatus {
// region Database methods
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
impl EmergencyAccess {
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.grantor_uuid, conn).await;
self.updated_at = Utc::now().naive_utc();
@@ -189,7 +165,46 @@ impl EmergencyAccess {
}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn update_access_status_and_save(
&mut self,
status: i32,
date: &NaiveDateTime,
conn: &mut DbConn,
) -> EmptyResult {
// Update the grantee so that it will refresh its status.
User::update_uuid_revision(self.grantee_uuid.as_ref().expect("Error getting grantee"), conn).await;
self.status = status;
self.updated_at = date.to_owned();
db_run! {conn: {
crate::util::retry(|| {
diesel::update(emergency_access::table.filter(emergency_access::uuid.eq(&self.uuid)))
.set((emergency_access::status.eq(status), emergency_access::updated_at.eq(date)))
.execute(conn)
}, 10)
.map_res("Error updating emergency access status")
}}
}
pub async fn update_last_notification_date_and_save(
&mut self,
date: &NaiveDateTime,
conn: &mut DbConn,
) -> EmptyResult {
self.last_notification_at = Some(date.to_owned());
self.updated_at = date.to_owned();
db_run! {conn: {
crate::util::retry(|| {
diesel::update(emergency_access::table.filter(emergency_access::uuid.eq(&self.uuid)))
.set((emergency_access::last_notification_at.eq(date), emergency_access::updated_at.eq(date)))
.execute(conn)
}, 10)
.map_res("Error updating emergency access status")
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for ea in Self::find_all_by_grantor_uuid(user_uuid, conn).await {
ea.delete(conn).await?;
}
@@ -199,7 +214,7 @@ impl EmergencyAccess {
Ok(())
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.grantor_uuid, conn).await;
db_run! { conn: {
@@ -209,7 +224,7 @@ impl EmergencyAccess {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
@@ -222,7 +237,7 @@ impl EmergencyAccess {
grantor_uuid: &str,
grantee_uuid: &str,
email: &str,
conn: &DbConn,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
@@ -233,15 +248,16 @@ impl EmergencyAccess {
}}
}
pub async fn find_all_recoveries(conn: &DbConn) -> Vec<Self> {
pub async fn find_all_recoveries_initiated(conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::status.eq(EmergencyAccessStatus::RecoveryInitiated as i32))
.filter(emergency_access::recovery_initiated_at.is_not_null())
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub async fn find_by_uuid_and_grantor_uuid(uuid: &str, grantor_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_grantor_uuid(uuid: &str, grantor_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
@@ -251,7 +267,7 @@ impl EmergencyAccess {
}}
}
pub async fn find_all_by_grantee_uuid(grantee_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_all_by_grantee_uuid(grantee_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantee_uuid.eq(grantee_uuid))
@@ -259,7 +275,7 @@ impl EmergencyAccess {
}}
}
pub async fn find_invited_by_grantee_email(grantee_email: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_invited_by_grantee_email(grantee_email: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::email.eq(grantee_email))
@@ -269,7 +285,7 @@ impl EmergencyAccess {
}}
}
pub async fn find_all_by_grantor_uuid(grantor_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_all_by_grantor_uuid(grantor_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
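The two new update_*_and_save methods wrap their UPDATE statements in crate::util::retry with up to 10 attempts, presumably to ride out transient errors such as a busy SQLite database. A stand-in with the same shape (closure plus a maximum number of attempts); the real helper lives in crate::util and may differ, for example by sleeping between attempts:

fn retry<F, T, E>(mut func: F, max_tries: u32) -> Result<T, E>
where
    F: FnMut() -> Result<T, E>,
{
    let mut tries = 0;
    loop {
        match func() {
            Ok(value) => return Ok(value),
            Err(err) => {
                tries += 1;
                if tries >= max_tries {
                    return Err(err);
                }
            }
        }
    }
}

fn main() {
    // Succeeds on the third attempt; with max_tries = 10 the error never surfaces.
    let mut attempts = 0;
    let result: Result<&str, &str> = retry(
        || {
            attempts += 1;
            if attempts < 3 {
                Err("database is locked")
            } else {
                Ok("updated")
            }
        },
        10,
    );
    assert_eq!(result, Ok("updated"));
}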

src/db/models/event.rs (new file, 318 lines)
View File

@@ -0,0 +1,318 @@
use crate::db::DbConn;
use serde_json::Value;
use crate::{api::EmptyResult, error::MapResult, CONFIG};
use chrono::{Duration, NaiveDateTime, Utc};
// https://bitwarden.com/help/event-logs/
db_object! {
// Upstream: https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Services/Implementations/EventService.cs
// Upstream: https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Api/Models/Public/Response/EventResponseModel.cs
// Upstream SQL: https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Sql/dbo/Tables/Event.sql
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = event)]
#[diesel(primary_key(uuid))]
pub struct Event {
pub uuid: String,
pub event_type: i32, // EventType
pub user_uuid: Option<String>,
pub org_uuid: Option<String>,
pub cipher_uuid: Option<String>,
pub collection_uuid: Option<String>,
pub group_uuid: Option<String>,
pub org_user_uuid: Option<String>,
pub act_user_uuid: Option<String>,
// Upstream enum: https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Enums/DeviceType.cs
pub device_type: Option<i32>,
pub ip_address: Option<String>,
pub event_date: NaiveDateTime,
pub policy_uuid: Option<String>,
pub provider_uuid: Option<String>,
pub provider_user_uuid: Option<String>,
pub provider_org_uuid: Option<String>,
}
}
// Upstream enum: https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Enums/EventType.cs
#[derive(Debug, Copy, Clone)]
pub enum EventType {
// User
UserLoggedIn = 1000,
UserChangedPassword = 1001,
UserUpdated2fa = 1002,
UserDisabled2fa = 1003,
UserRecovered2fa = 1004,
UserFailedLogIn = 1005,
UserFailedLogIn2fa = 1006,
UserClientExportedVault = 1007,
// UserUpdatedTempPassword = 1008, // Not supported
// UserMigratedKeyToKeyConnector = 1009, // Not supported
// Cipher
CipherCreated = 1100,
CipherUpdated = 1101,
CipherDeleted = 1102,
CipherAttachmentCreated = 1103,
CipherAttachmentDeleted = 1104,
CipherShared = 1105,
CipherUpdatedCollections = 1106,
CipherClientViewed = 1107,
CipherClientToggledPasswordVisible = 1108,
CipherClientToggledHiddenFieldVisible = 1109,
CipherClientToggledCardCodeVisible = 1110,
CipherClientCopiedPassword = 1111,
CipherClientCopiedHiddenField = 1112,
CipherClientCopiedCardCode = 1113,
CipherClientAutofilled = 1114,
CipherSoftDeleted = 1115,
CipherRestored = 1116,
CipherClientToggledCardNumberVisible = 1117,
// Collection
CollectionCreated = 1300,
CollectionUpdated = 1301,
CollectionDeleted = 1302,
// Group
GroupCreated = 1400,
GroupUpdated = 1401,
GroupDeleted = 1402,
// OrganizationUser
OrganizationUserInvited = 1500,
OrganizationUserConfirmed = 1501,
OrganizationUserUpdated = 1502,
OrganizationUserRemoved = 1503,
OrganizationUserUpdatedGroups = 1504,
// OrganizationUserUnlinkedSso = 1505, // Not supported
// OrganizationUserResetPasswordEnroll = 1506, // Not supported
// OrganizationUserResetPasswordWithdraw = 1507, // Not supported
// OrganizationUserAdminResetPassword = 1508, // Not supported
// OrganizationUserResetSsoLink = 1509, // Not supported
// OrganizationUserFirstSsoLogin = 1510, // Not supported
OrganizationUserRevoked = 1511,
OrganizationUserRestored = 1512,
// Organization
OrganizationUpdated = 1600,
OrganizationPurgedVault = 1601,
OrganizationClientExportedVault = 1602,
// OrganizationVaultAccessed = 1603,
// OrganizationEnabledSso = 1604, // Not supported
// OrganizationDisabledSso = 1605, // Not supported
// OrganizationEnabledKeyConnector = 1606, // Not supported
// OrganizationDisabledKeyConnector = 1607, // Not supported
// OrganizationSponsorshipsSynced = 1608, // Not supported
// Policy
PolicyUpdated = 1700,
// Provider (Not yet supported)
// ProviderUserInvited = 1800, // Not supported
// ProviderUserConfirmed = 1801, // Not supported
// ProviderUserUpdated = 1802, // Not supported
// ProviderUserRemoved = 1803, // Not supported
// ProviderOrganizationCreated = 1900, // Not supported
// ProviderOrganizationAdded = 1901, // Not supported
// ProviderOrganizationRemoved = 1902, // Not supported
// ProviderOrganizationVaultAccessed = 1903, // Not supported
}
/// Local methods
impl Event {
pub fn new(event_type: i32, event_date: Option<NaiveDateTime>) -> Self {
let event_date = match event_date {
Some(d) => d,
None => Utc::now().naive_utc(),
};
Self {
uuid: crate::util::get_uuid(),
event_type,
user_uuid: None,
org_uuid: None,
cipher_uuid: None,
collection_uuid: None,
group_uuid: None,
org_user_uuid: None,
act_user_uuid: None,
device_type: None,
ip_address: None,
event_date,
policy_uuid: None,
provider_uuid: None,
provider_user_uuid: None,
provider_org_uuid: None,
}
}
pub fn to_json(&self) -> Value {
use crate::util::format_date;
json!({
"type": self.event_type,
"userId": self.user_uuid,
"organizationId": self.org_uuid,
"cipherId": self.cipher_uuid,
"collectionId": self.collection_uuid,
"groupId": self.group_uuid,
"organizationUserId": self.org_user_uuid,
"actingUserId": self.act_user_uuid,
"date": format_date(&self.event_date),
"deviceType": self.device_type,
"ipAddress": self.ip_address,
"policyId": self.policy_uuid,
"providerId": self.provider_uuid,
"providerUserId": self.provider_user_uuid,
"providerOrganizationId": self.provider_org_uuid,
// "installationId": null, // Not supported
})
}
}
/// Database methods
/// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Services/Implementations/EventService.cs
impl Event {
pub const PAGE_SIZE: i64 = 30;
/// #############
/// Basic Queries
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
diesel::replace_into(event::table)
.values(EventDb::to_db(self))
.execute(conn)
.map_res("Error saving event")
}
postgresql {
diesel::insert_into(event::table)
.values(EventDb::to_db(self))
.on_conflict(event::uuid)
.do_update()
.set(EventDb::to_db(self))
.execute(conn)
.map_res("Error saving event")
}
}
}
pub async fn save_user_event(events: Vec<Event>, conn: &mut DbConn) -> EmptyResult {
// Special save function which is able to handle multiple events.
// SQLite doesn't support the DEFAULT argument, and does not support inserting multiple values at the same time.
// MySQL and PostgreSQL do.
// We also ignore duplicates if they ever exist, else it could break the whole flow.
db_run! { conn:
// Unfortunately SQLite does not support inserting multiple records at the same time
// We loop through the events here and insert them one at a time.
sqlite {
for event in events {
diesel::insert_or_ignore_into(event::table)
.values(EventDb::to_db(&event))
.execute(conn)
.unwrap_or_default();
}
Ok(())
}
mysql {
let events: Vec<EventDb> = events.iter().map(EventDb::to_db).collect();
diesel::insert_or_ignore_into(event::table)
.values(&events)
.execute(conn)
.unwrap_or_default();
Ok(())
}
postgresql {
let events: Vec<EventDb> = events.iter().map(EventDb::to_db).collect();
diesel::insert_into(event::table)
.values(&events)
.on_conflict_do_nothing()
.execute(conn)
.unwrap_or_default();
Ok(())
}
}
}
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(event::table.filter(event::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error deleting event")
}}
}
/// ##############
/// Custom Queries
pub async fn find_by_organization_uuid(
org_uuid: &str,
start: &NaiveDateTime,
end: &NaiveDateTime,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
event::table
.filter(event::org_uuid.eq(org_uuid))
.filter(event::event_date.between(start, end))
.order_by(event::event_date.desc())
.limit(Self::PAGE_SIZE)
.load::<EventDb>(conn)
.expect("Error filtering events")
.from_db()
}}
}
pub async fn find_by_org_and_user_org(
org_uuid: &str,
user_org_uuid: &str,
start: &NaiveDateTime,
end: &NaiveDateTime,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
event::table
.inner_join(users_organizations::table.on(users_organizations::uuid.eq(user_org_uuid)))
.filter(event::org_uuid.eq(org_uuid))
.filter(event::event_date.between(start, end))
.filter(event::user_uuid.eq(users_organizations::user_uuid.nullable()).or(event::act_user_uuid.eq(users_organizations::user_uuid.nullable())))
.select(event::all_columns)
.order_by(event::event_date.desc())
.limit(Self::PAGE_SIZE)
.load::<EventDb>(conn)
.expect("Error filtering events")
.from_db()
}}
}
pub async fn find_by_cipher_uuid(
cipher_uuid: &str,
start: &NaiveDateTime,
end: &NaiveDateTime,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
event::table
.filter(event::cipher_uuid.eq(cipher_uuid))
.filter(event::event_date.between(start, end))
.order_by(event::event_date.desc())
.limit(Self::PAGE_SIZE)
.load::<EventDb>(conn)
.expect("Error filtering events")
.from_db()
}}
}
pub async fn clean_events(conn: &mut DbConn) -> EmptyResult {
if let Some(days_to_retain) = CONFIG.events_days_retain() {
let dt = Utc::now().naive_utc() - Duration::days(days_to_retain);
db_run! { conn: {
diesel::delete(event::table.filter(event::event_date.lt(dt)))
.execute(conn)
.map_res("Error cleaning old events")
}}
} else {
Ok(())
}
}
}
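The new event.rs model stores the Bitwarden event code directly in the event_type column, so the EventType discriminants above are the values that end up in the database and in to_json. A minimal sketch of that enum-to-i32 mapping; the three variants shown are copied from the list above, the rest is illustrative:

#[derive(Debug, Copy, Clone)]
enum EventType {
    UserLoggedIn = 1000,
    CipherCreated = 1100,
    OrganizationUserInvited = 1500,
}

fn main() {
    for (name, code) in [
        ("UserLoggedIn", EventType::UserLoggedIn as i32),
        ("CipherCreated", EventType::CipherCreated as i32),
        ("OrganizationUserInvited", EventType::OrganizationUserInvited as i32),
    ] {
        println!("{name} -> event_type = {code}");
    }
    assert_eq!(EventType::CipherCreated as i32, 1100);
}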

View File

@@ -2,8 +2,8 @@ use super::User;
db_object! {
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "favorites"]
#[primary_key(user_uuid, cipher_uuid)]
#[diesel(table_name = favorites)]
#[diesel(primary_key(user_uuid, cipher_uuid))]
pub struct Favorite {
pub user_uuid: String,
pub cipher_uuid: String,
@@ -17,7 +17,7 @@ use crate::error::MapResult;
impl Favorite {
// Returns whether the specified cipher is a favorite of the specified user.
pub async fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> bool {
pub async fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> bool {
db_run! { conn: {
let query = favorites::table
.filter(favorites::cipher_uuid.eq(cipher_uuid))
@@ -29,7 +29,7 @@ impl Favorite {
}
// Sets whether the specified cipher is a favorite of the specified user.
pub async fn set_favorite(favorite: bool, cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn set_favorite(favorite: bool, cipher_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
let (old, new) = (Self::is_favorite(cipher_uuid, user_uuid, conn).await, favorite);
match (old, new) {
(false, true) => {
@@ -62,7 +62,7 @@ impl Favorite {
}
// Delete all favorite entries associated with the specified cipher.
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(favorites::table.filter(favorites::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -71,7 +71,7 @@ impl Favorite {
}
// Delete all favorite entries associated with the specified user.
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(favorites::table.filter(favorites::user_uuid.eq(user_uuid)))
.execute(conn)
@@ -81,7 +81,7 @@ impl Favorite {
/// Return a vec with (cipher_uuid); this will only contain favorite-flagged ciphers.
/// This is used during a full sync so we only need one query for all favorite cipher matches.
pub async fn get_all_cipher_uuid_by_user(user_uuid: &str, conn: &DbConn) -> Vec<String> {
pub async fn get_all_cipher_uuid_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<String> {
db_run! { conn: {
favorites::table
.filter(favorites::user_uuid.eq(user_uuid))

View File

@@ -5,8 +5,8 @@ use super::User;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "folders"]
#[primary_key(uuid)]
#[diesel(table_name = folders)]
#[diesel(primary_key(uuid))]
pub struct Folder {
pub uuid: String,
pub created_at: NaiveDateTime,
@@ -16,8 +16,8 @@ db_object! {
}
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "folders_ciphers"]
#[primary_key(cipher_uuid, folder_uuid)]
#[diesel(table_name = folders_ciphers)]
#[diesel(primary_key(cipher_uuid, folder_uuid))]
pub struct FolderCipher {
pub cipher_uuid: String,
pub folder_uuid: String,
@@ -67,7 +67,7 @@ use crate::error::MapResult;
/// Database methods
impl Folder {
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
self.updated_at = Utc::now().naive_utc();
@@ -102,7 +102,7 @@ impl Folder {
}
}
pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
FolderCipher::delete_all_by_folder(&self.uuid, conn).await?;
@@ -113,14 +113,14 @@ impl Folder {
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for folder in Self::find_by_user(user_uuid, conn).await {
folder.delete(conn).await?;
}
Ok(())
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
folders::table
.filter(folders::uuid.eq(uuid))
@@ -130,7 +130,7 @@ impl Folder {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
folders::table
.filter(folders::user_uuid.eq(user_uuid))
@@ -142,7 +142,7 @@ impl Folder {
}
impl FolderCipher {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
// Not checking for ForeignKey Constraints here.
@@ -164,7 +164,7 @@ impl FolderCipher {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(
folders_ciphers::table
@@ -176,7 +176,7 @@ impl FolderCipher {
}}
}
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -184,7 +184,7 @@ impl FolderCipher {
}}
}
pub async fn delete_all_by_folder(folder_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_folder(folder_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::folder_uuid.eq(folder_uuid)))
.execute(conn)
@@ -192,7 +192,7 @@ impl FolderCipher {
}}
}
pub async fn find_by_folder_and_cipher(folder_uuid: &str, cipher_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_folder_and_cipher(folder_uuid: &str, cipher_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -203,7 +203,7 @@ impl FolderCipher {
}}
}
pub async fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_folder(folder_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -215,7 +215,7 @@ impl FolderCipher {
/// Return a vec with (cipher_uuid, folder_uuid)
/// This is used during a full sync so we only need one query for all folder matches.
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<(String, String)> {
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<(String, String)> {
db_run! { conn: {
folders_ciphers::table
.inner_join(folders::table)

src/db/models/group.rs (new file, 501 lines)
View File

@@ -0,0 +1,501 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = groups)]
#[diesel(primary_key(uuid))]
pub struct Group {
pub uuid: String,
pub organizations_uuid: String,
pub name: String,
pub access_all: bool,
external_id: Option<String>,
pub creation_date: NaiveDateTime,
pub revision_date: NaiveDateTime,
}
#[derive(Identifiable, Queryable, Insertable)]
#[diesel(table_name = collections_groups)]
#[diesel(primary_key(collections_uuid, groups_uuid))]
pub struct CollectionGroup {
pub collections_uuid: String,
pub groups_uuid: String,
pub read_only: bool,
pub hide_passwords: bool,
}
#[derive(Identifiable, Queryable, Insertable)]
#[diesel(table_name = groups_users)]
#[diesel(primary_key(groups_uuid, users_organizations_uuid))]
pub struct GroupUser {
pub groups_uuid: String,
pub users_organizations_uuid: String
}
}
/// Local methods
impl Group {
pub fn new(organizations_uuid: String, name: String, access_all: bool, external_id: Option<String>) -> Self {
let now = Utc::now().naive_utc();
let mut new_model = Self {
uuid: crate::util::get_uuid(),
organizations_uuid,
name,
access_all,
external_id: None,
creation_date: now,
revision_date: now,
};
new_model.set_external_id(external_id);
new_model
}
pub fn to_json(&self) -> Value {
use crate::util::format_date;
json!({
"Id": self.uuid,
"OrganizationId": self.organizations_uuid,
"Name": self.name,
"AccessAll": self.access_all,
"ExternalId": self.external_id,
"CreationDate": format_date(&self.creation_date),
"RevisionDate": format_date(&self.revision_date)
})
}
pub fn set_external_id(&mut self, external_id: Option<String>) {
// Check if the external id is empty. We don't want to have
// empty strings in the database.
match external_id {
Some(external_id) => {
if external_id.is_empty() {
self.external_id = None;
} else {
self.external_id = Some(external_id)
}
}
None => self.external_id = None,
}
}
pub fn get_external_id(&self) -> Option<String> {
self.external_id.clone()
}
}
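A minimal sketch of the normalization above, assuming an `org_uuid` string is in scope: empty external ids end up as `None` rather than being stored as empty strings.
// Sketch: empty external ids are normalized to None instead of being stored as "".
let mut group = Group::new(org_uuid.to_string(), "Engineering".to_string(), false, Some(String::new()));
assert_eq!(group.get_external_id(), None);
group.set_external_id(Some("scim-42".to_string()));
assert_eq!(group.get_external_id(), Some("scim-42".to_string()));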
impl CollectionGroup {
pub fn new(collections_uuid: String, groups_uuid: String, read_only: bool, hide_passwords: bool) -> Self {
Self {
collections_uuid,
groups_uuid,
read_only,
hide_passwords,
}
}
}
impl GroupUser {
pub fn new(groups_uuid: String, users_organizations_uuid: String) -> Self {
Self {
groups_uuid,
users_organizations_uuid,
}
}
}
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
use super::{User, UserOrganization};
/// Database methods
impl Group {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
self.revision_date = Utc::now().naive_utc();
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(groups::table)
.values(GroupDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(groups::table)
.filter(groups::uuid.eq(&self.uuid))
.set(GroupDb::to_db(self))
.execute(conn)
.map_res("Error saving group")
}
Err(e) => Err(e.into()),
}.map_res("Error saving group")
}
postgresql {
let value = GroupDb::to_db(self);
diesel::insert_into(groups::table)
.values(&value)
.on_conflict(groups::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving group")
}
}
}
pub async fn find_by_organization(organizations_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
groups::table
.filter(groups::organizations_uuid.eq(organizations_uuid))
.load::<GroupDb>(conn)
.expect("Error loading groups")
.from_db()
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
groups::table
.filter(groups::uuid.eq(uuid))
.first::<GroupDb>(conn)
.ok()
.from_db()
}}
}
// Returns the uuids of all organizations the user has full access to via an access_all group
pub async fn gather_user_organizations_full_access(user_uuid: &str, conn: &mut DbConn) -> Vec<String> {
db_run! { conn: {
groups_users::table
.inner_join(users_organizations::table.on(
users_organizations::uuid.eq(groups_users::users_organizations_uuid)
))
.inner_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(groups::access_all.eq(true))
.select(groups::organizations_uuid)
.distinct()
.load::<String>(conn)
.expect("Error loading organization group full access information for user")
}}
}
pub async fn is_in_full_access_group(user_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> bool {
db_run! { conn: {
groups::table
.inner_join(groups_users::table.on(
groups_users::groups_uuid.eq(groups::uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::uuid.eq(groups_users::users_organizations_uuid)
))
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(groups::organizations_uuid.eq(org_uuid))
.filter(groups::access_all.eq(true))
.select(groups::access_all)
.first::<bool>(conn)
.unwrap_or_default()
}}
}
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
CollectionGroup::delete_all_by_group(&self.uuid, conn).await?;
GroupUser::delete_all_by_group(&self.uuid, conn).await?;
db_run! { conn: {
diesel::delete(groups::table.filter(groups::uuid.eq(&self.uuid)))
.execute(conn)
.map_res("Error deleting group")
}}
}
pub async fn update_revision(uuid: &str, conn: &mut DbConn) {
if let Err(e) = Self::_update_revision(uuid, &Utc::now().naive_utc(), conn).await {
warn!("Failed to update revision for {}: {:#?}", uuid, e);
}
}
async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
db_run! {conn: {
crate::util::retry(|| {
diesel::update(groups::table.filter(groups::uuid.eq(uuid)))
.set(groups::revision_date.eq(date))
.execute(conn)
}, 10)
.map_res("Error updating group revision")
}}
}
}
impl CollectionGroup {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
let group_users = GroupUser::find_by_group(&self.groups_uuid, conn).await;
for group_user in group_users {
group_user.update_user_revision(conn).await;
}
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(collections_groups::table)
.values((
collections_groups::collections_uuid.eq(&self.collections_uuid),
collections_groups::groups_uuid.eq(&self.groups_uuid),
collections_groups::read_only.eq(&self.read_only),
collections_groups::hide_passwords.eq(&self.hide_passwords),
))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(collections_groups::table)
.filter(collections_groups::collections_uuid.eq(&self.collections_uuid))
.filter(collections_groups::groups_uuid.eq(&self.groups_uuid))
.set((
collections_groups::collections_uuid.eq(&self.collections_uuid),
collections_groups::groups_uuid.eq(&self.groups_uuid),
collections_groups::read_only.eq(&self.read_only),
collections_groups::hide_passwords.eq(&self.hide_passwords),
))
.execute(conn)
.map_res("Error adding group to collection")
}
Err(e) => Err(e.into()),
}.map_res("Error adding group to collection")
}
postgresql {
diesel::insert_into(collections_groups::table)
.values((
collections_groups::collections_uuid.eq(&self.collections_uuid),
collections_groups::groups_uuid.eq(&self.groups_uuid),
collections_groups::read_only.eq(self.read_only),
collections_groups::hide_passwords.eq(self.hide_passwords),
))
.on_conflict((collections_groups::collections_uuid, collections_groups::groups_uuid))
.do_update()
.set((
collections_groups::read_only.eq(self.read_only),
collections_groups::hide_passwords.eq(self.hide_passwords),
))
.execute(conn)
.map_res("Error adding group to collection")
}
}
}
pub async fn find_by_group(group_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections_groups::table
.filter(collections_groups::groups_uuid.eq(group_uuid))
.load::<CollectionGroupDb>(conn)
.expect("Error loading collection groups")
.from_db()
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections_groups::table
.inner_join(groups_users::table.on(
groups_users::groups_uuid.eq(collections_groups::groups_uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::uuid.eq(groups_users::users_organizations_uuid)
))
.filter(users_organizations::user_uuid.eq(user_uuid))
.select(collections_groups::all_columns)
.load::<CollectionGroupDb>(conn)
.expect("Error loading user collection groups")
.from_db()
}}
}
pub async fn find_by_collection(collection_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections_groups::table
.filter(collections_groups::collections_uuid.eq(collection_uuid))
.select(collections_groups::all_columns)
.load::<CollectionGroupDb>(conn)
.expect("Error loading collection groups")
.from_db()
}}
}
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
let group_users = GroupUser::find_by_group(&self.groups_uuid, conn).await;
for group_user in group_users {
group_user.update_user_revision(conn).await;
}
db_run! { conn: {
diesel::delete(collections_groups::table)
.filter(collections_groups::collections_uuid.eq(&self.collections_uuid))
.filter(collections_groups::groups_uuid.eq(&self.groups_uuid))
.execute(conn)
.map_res("Error deleting collection group")
}}
}
pub async fn delete_all_by_group(group_uuid: &str, conn: &mut DbConn) -> EmptyResult {
let group_users = GroupUser::find_by_group(group_uuid, conn).await;
for group_user in group_users {
group_user.update_user_revision(conn).await;
}
db_run! { conn: {
diesel::delete(collections_groups::table)
.filter(collections_groups::groups_uuid.eq(group_uuid))
.execute(conn)
.map_res("Error deleting collection group")
}}
}
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
let collection_assigned_to_groups = CollectionGroup::find_by_collection(collection_uuid, conn).await;
for collection_assigned_to_group in collection_assigned_to_groups {
let group_users = GroupUser::find_by_group(&collection_assigned_to_group.groups_uuid, conn).await;
for group_user in group_users {
group_user.update_user_revision(conn).await;
}
}
db_run! { conn: {
diesel::delete(collections_groups::table)
.filter(collections_groups::collections_uuid.eq(collection_uuid))
.execute(conn)
.map_res("Error deleting collection group")
}}
}
}
impl GroupUser {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
self.update_user_revision(conn).await;
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(groups_users::table)
.values((
groups_users::users_organizations_uuid.eq(&self.users_organizations_uuid),
groups_users::groups_uuid.eq(&self.groups_uuid),
))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(groups_users::table)
.filter(groups_users::users_organizations_uuid.eq(&self.users_organizations_uuid))
.filter(groups_users::groups_uuid.eq(&self.groups_uuid))
.set((
groups_users::users_organizations_uuid.eq(&self.users_organizations_uuid),
groups_users::groups_uuid.eq(&self.groups_uuid),
))
.execute(conn)
.map_res("Error adding user to group")
}
Err(e) => Err(e.into()),
}.map_res("Error adding user to group")
}
postgresql {
diesel::insert_into(groups_users::table)
.values((
groups_users::users_organizations_uuid.eq(&self.users_organizations_uuid),
groups_users::groups_uuid.eq(&self.groups_uuid),
))
.on_conflict((groups_users::users_organizations_uuid, groups_users::groups_uuid))
.do_update()
.set((
groups_users::users_organizations_uuid.eq(&self.users_organizations_uuid),
groups_users::groups_uuid.eq(&self.groups_uuid),
))
.execute(conn)
.map_res("Error adding user to group")
}
}
}
pub async fn find_by_group(group_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
groups_users::table
.filter(groups_users::groups_uuid.eq(group_uuid))
.load::<GroupUserDb>(conn)
.expect("Error loading group users")
.from_db()
}}
}
pub async fn find_by_user(users_organizations_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
groups_users::table
.filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
.load::<GroupUserDb>(conn)
.expect("Error loading groups for user")
.from_db()
}}
}
pub async fn update_user_revision(&self, conn: &mut DbConn) {
match UserOrganization::find_by_uuid(&self.users_organizations_uuid, conn).await {
Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
None => warn!("User could not be found!"),
}
}
pub async fn delete_by_group_id_and_user_id(
group_uuid: &str,
users_organizations_uuid: &str,
conn: &mut DbConn,
) -> EmptyResult {
match UserOrganization::find_by_uuid(users_organizations_uuid, conn).await {
Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
None => warn!("User could not be found!"),
};
db_run! { conn: {
diesel::delete(groups_users::table)
.filter(groups_users::groups_uuid.eq(group_uuid))
.filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
.execute(conn)
.map_res("Error deleting group users")
}}
}
pub async fn delete_all_by_group(group_uuid: &str, conn: &mut DbConn) -> EmptyResult {
let group_users = GroupUser::find_by_group(group_uuid, conn).await;
for group_user in group_users {
group_user.update_user_revision(conn).await;
}
db_run! { conn: {
diesel::delete(groups_users::table)
.filter(groups_users::groups_uuid.eq(group_uuid))
.execute(conn)
.map_res("Error deleting group users")
}}
}
pub async fn delete_all_by_user(users_organizations_uuid: &str, conn: &mut DbConn) -> EmptyResult {
match UserOrganization::find_by_uuid(users_organizations_uuid, conn).await {
Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
None => warn!("User could not be found!"),
}
db_run! { conn: {
diesel::delete(groups_users::table)
.filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
.execute(conn)
.map_res("Error deleting user groups")
}}
}
}
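For orientation, a hedged sketch of how the API layer might link an org membership to a group using the structs above; the surrounding handler context (`group`, `user_org`, `conn`) is assumed.
// Assumed context: `group` is a saved Group and `user_org` a UserOrganization of the same org.
let mut group_user = GroupUser::new(group.uuid.clone(), user_org.uuid.clone());
group_user.save(&mut conn).await?; // upserts the link and bumps the member's revision timestamp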

View File

@@ -3,8 +3,10 @@ mod cipher;
mod collection;
mod device;
mod emergency_access;
mod event;
mod favorite;
mod folder;
mod group;
mod org_policy;
mod organization;
mod send;
@@ -17,9 +19,11 @@ pub use self::cipher::Cipher;
pub use self::collection::{Collection, CollectionCipher, CollectionUser};
pub use self::device::Device;
pub use self::emergency_access::{EmergencyAccess, EmergencyAccessStatus, EmergencyAccessType};
pub use self::event::{Event, EventType};
pub use self::favorite::Favorite;
pub use self::folder::{Folder, FolderCipher};
pub use self::org_policy::{OrgPolicy, OrgPolicyType};
pub use self::group::{CollectionGroup, Group, GroupUser};
pub use self::org_policy::{OrgPolicy, OrgPolicyErr, OrgPolicyType};
pub use self::organization::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
pub use self::send::{Send, SendType};
pub use self::two_factor::{TwoFactor, TwoFactorType};

View File

@@ -6,12 +6,12 @@ use crate::db::DbConn;
use crate::error::MapResult;
use crate::util::UpCase;
use super::{UserOrgStatus, UserOrgType, UserOrganization};
use super::{TwoFactor, UserOrgStatus, UserOrgType, UserOrganization};
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "org_policies"]
#[primary_key(uuid)]
#[diesel(table_name = org_policies)]
#[diesel(primary_key(uuid))]
pub struct OrgPolicy {
pub uuid: String,
pub org_uuid: String,
@@ -21,25 +21,37 @@ db_object! {
}
}
// https://github.com/bitwarden/server/blob/b86a04cef9f1e1b82cf18e49fc94e017c641130c/src/Core/Enums/PolicyType.cs
#[derive(Copy, Clone, Eq, PartialEq, num_derive::FromPrimitive)]
pub enum OrgPolicyType {
TwoFactorAuthentication = 0,
MasterPassword = 1,
PasswordGenerator = 2,
SingleOrg = 3,
// RequireSso = 4, // Not currently supported.
// RequireSso = 4, // Not supported
PersonalOwnership = 5,
DisableSend = 6,
SendOptions = 7,
// ResetPassword = 8, // Not supported
// MaximumVaultTimeout = 9, // Not supported (Not AGPLv3 Licensed)
// DisablePersonalVaultExport = 10, // Not supported (Not AGPLv3 Licensed)
}
// https://github.com/bitwarden/server/blob/master/src/Core/Models/Data/SendOptionsPolicyData.cs
// https://github.com/bitwarden/server/blob/5cbdee137921a19b1f722920f0fa3cd45af2ef0f/src/Core/Models/Data/Organizations/Policies/SendOptionsPolicyData.cs
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct SendOptionsPolicyData {
pub DisableHideEmail: bool,
}
pub type OrgPolicyResult = Result<(), OrgPolicyErr>;
#[derive(Debug)]
pub enum OrgPolicyErr {
TwoFactorMissing,
SingleOrgEnforced,
}
/// Local methods
impl OrgPolicy {
pub fn new(org_uuid: String, atype: OrgPolicyType, data: String) -> Self {
@@ -71,7 +83,7 @@ impl OrgPolicy {
/// Database methods
impl OrgPolicy {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(org_policies::table)
@@ -114,7 +126,7 @@ impl OrgPolicy {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(org_policies::table.filter(org_policies::uuid.eq(self.uuid)))
.execute(conn)
@@ -122,7 +134,7 @@ impl OrgPolicy {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::uuid.eq(uuid))
@@ -132,7 +144,7 @@ impl OrgPolicy {
}}
}
pub async fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
@@ -142,7 +154,7 @@ impl OrgPolicy {
}}
}
pub async fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_confirmed_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
org_policies::table
.inner_join(
@@ -160,18 +172,18 @@ impl OrgPolicy {
}}
}
pub async fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
pub async fn find_by_org_and_type(org_uuid: &str, policy_type: OrgPolicyType, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
.filter(org_policies::atype.eq(atype))
.filter(org_policies::atype.eq(policy_type as i32))
.first::<OrgPolicyDb>(conn)
.ok()
.from_db()
}}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(org_policies::table.filter(org_policies::org_uuid.eq(org_uuid)))
.execute(conn)
@@ -179,40 +191,128 @@ impl OrgPolicy {
}}
}
pub async fn find_accepted_and_confirmed_by_user_and_active_policy(
user_uuid: &str,
policy_type: OrgPolicyType,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
org_policies::table
.inner_join(
users_organizations::table.on(
users_organizations::org_uuid.eq(org_policies::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid)))
)
.filter(
users_organizations::status.eq(UserOrgStatus::Accepted as i32)
)
.or_filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
.filter(org_policies::atype.eq(policy_type as i32))
.filter(org_policies::enabled.eq(true))
.select(org_policies::all_columns)
.load::<OrgPolicyDb>(conn)
.expect("Error loading org_policy")
.from_db()
}}
}
pub async fn find_confirmed_by_user_and_active_policy(
user_uuid: &str,
policy_type: OrgPolicyType,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
org_policies::table
.inner_join(
users_organizations::table.on(
users_organizations::org_uuid.eq(org_policies::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid)))
)
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
.filter(org_policies::atype.eq(policy_type as i32))
.filter(org_policies::enabled.eq(true))
.select(org_policies::all_columns)
.load::<OrgPolicyDb>(conn)
.expect("Error loading org_policy")
.from_db()
}}
}
/// Returns true if the user belongs to an org that has enabled the specified policy type,
/// and the user is not an owner or admin of that org. This is only useful for checking
/// applicability of policy types that have these particular semantics.
pub async fn is_applicable_to_user(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> bool {
// TODO: Should check confirmed and accepted users
for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn).await {
if policy.enabled && policy.has_type(policy_type) {
let org_uuid = &policy.org_uuid;
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn).await {
if user.atype < UserOrgType::Admin {
return true;
}
pub async fn is_applicable_to_user(
user_uuid: &str,
policy_type: OrgPolicyType,
exclude_org_uuid: Option<&str>,
conn: &mut DbConn,
) -> bool {
for policy in
OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(user_uuid, policy_type, conn).await
{
// Check if we need to skip this organization.
if exclude_org_uuid.is_some() && exclude_org_uuid.unwrap() == policy.org_uuid {
continue;
}
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
if user.atype < UserOrgType::Admin {
return true;
}
}
}
false
}
pub async fn is_user_allowed(
user_uuid: &str,
org_uuid: &str,
exclude_current_org: bool,
conn: &mut DbConn,
) -> OrgPolicyResult {
// Enforce TwoFactor/TwoStep login
if TwoFactor::find_by_user(user_uuid, conn).await.is_empty() {
match Self::find_by_org_and_type(org_uuid, OrgPolicyType::TwoFactorAuthentication, conn).await {
Some(p) if p.enabled => {
return Err(OrgPolicyErr::TwoFactorMissing);
}
_ => {}
};
}
// Enforce Single Organization Policy of other organizations user is a member of
// This check needs to exclude the current org-id, else an accepted user cannot be confirmed.
let exclude_org = if exclude_current_org {
Some(org_uuid)
} else {
None
};
if Self::is_applicable_to_user(user_uuid, OrgPolicyType::SingleOrg, exclude_org, conn).await {
return Err(OrgPolicyErr::SingleOrgEnforced);
}
Ok(())
}
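A sketch of how a confirm/accept endpoint could consume this result; the error messages are illustrative, not copied from the actual handlers.
match OrgPolicy::is_user_allowed(&user_org.user_uuid, &user_org.org_uuid, true, &mut conn).await {
    Ok(()) => { /* proceed with confirming the membership */ }
    Err(OrgPolicyErr::TwoFactorMissing) => err!("You cannot join this organization until you enable two-step login"),
    Err(OrgPolicyErr::SingleOrgEnforced) => err!("You cannot join this organization because you are a member of another organization which forbids it"),
}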
/// Returns true if the user belongs to an org that has enabled the `DisableHideEmail`
/// option of the `Send Options` policy, and the user is not an owner or admin of that org.
pub async fn is_hide_email_disabled(user_uuid: &str, conn: &DbConn) -> bool {
for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn).await {
if policy.enabled && policy.has_type(OrgPolicyType::SendOptions) {
let org_uuid = &policy.org_uuid;
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn).await {
if user.atype < UserOrgType::Admin {
match serde_json::from_str::<UpCase<SendOptionsPolicyData>>(&policy.data) {
Ok(opts) => {
if opts.data.DisableHideEmail {
return true;
}
pub async fn is_hide_email_disabled(user_uuid: &str, conn: &mut DbConn) -> bool {
for policy in
OrgPolicy::find_confirmed_by_user_and_active_policy(user_uuid, OrgPolicyType::SendOptions, conn).await
{
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
if user.atype < UserOrgType::Admin {
match serde_json::from_str::<UpCase<SendOptionsPolicyData>>(&policy.data) {
Ok(opts) => {
if opts.data.DisableHideEmail {
return true;
}
_ => error!("Failed to deserialize policy data: {}", policy.data),
}
_ => error!("Failed to deserialize SendOptionsPolicyData: {}", policy.data),
}
}
}
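For reference, the policy `data` column this check deserializes is a small JSON object; a hedged sketch of a matching payload (the exact JSON written by the clients may differ):
// Assumed example payload; the key here already matches the struct field name.
let raw = r#"{"DisableHideEmail": true}"#;
let opts: UpCase<SendOptionsPolicyData> = serde_json::from_str(raw).expect("valid SendOptions policy data");
assert!(opts.data.DisableHideEmail);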

View File

@@ -2,12 +2,13 @@ use num_traits::FromPrimitive;
use serde_json::Value;
use std::cmp::Ordering;
use super::{CollectionUser, OrgPolicy, OrgPolicyType, User};
use super::{CollectionUser, GroupUser, OrgPolicy, OrgPolicyType, User};
use crate::CONFIG;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "organizations"]
#[primary_key(uuid)]
#[diesel(table_name = organizations)]
#[diesel(primary_key(uuid))]
pub struct Organization {
pub uuid: String,
pub name: String,
@@ -17,8 +18,8 @@ db_object! {
}
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users_organizations"]
#[primary_key(uuid)]
#[diesel(table_name = users_organizations)]
#[diesel(primary_key(uuid))]
pub struct UserOrganization {
pub uuid: String,
pub user_uuid: String,
@@ -31,7 +32,9 @@ db_object! {
}
}
// https://github.com/bitwarden/server/blob/b86a04cef9f1e1b82cf18e49fc94e017c641130c/src/Core/Enums/OrganizationUserStatusType.cs
pub enum UserOrgStatus {
Revoked = -1,
Invited = 0,
Accepted = 1,
Confirmed = 2,
@@ -133,26 +136,29 @@ impl Organization {
public_key,
}
}
// https://github.com/bitwarden/server/blob/13d1e74d6960cf0d042620b72d85bf583a4236f7/src/Api/Models/Response/Organizations/OrganizationResponseModel.cs
pub fn to_json(&self) -> Value {
json!({
"Id": self.uuid,
"Identifier": null, // not supported by us
"Name": self.name,
"Seats": 10, // The value doesn't matter, we don't check server-side
// "MaxAutoscaleSeats": null, // The value doesn't matter, we don't check server-side
"MaxCollections": 10, // The value doesn't matter, we don't check server-side
"MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
"Use2fa": true,
"UseDirectory": false, // Is supported, but this value isn't checked anywhere (yet)
"UseEvents": false, // not supported by us
"UseGroups": false, // not supported by us
"UseEvents": CONFIG.org_events_enabled(),
"UseGroups": CONFIG.org_groups_enabled(),
"UseTotp": true,
"UsePolicies": true,
"UseSso": false, // We do not support SSO
// "UseScim": false, // Not supported (Not AGPLv3 Licensed)
"UseSso": false, // Not supported
// "UseKeyConnector": false, // Not supported
"SelfHost": true,
"UseApi": false, // not supported by us
"UseApi": false, // Not supported
"HasPublicAndPrivateKeys": self.private_key.is_some() && self.public_key.is_some(),
"ResetPasswordEnrolled": false, // not supported by us
"UseResetPassword": false, // Not supported
"BusinessName": null,
"BusinessAddress1": null,
@@ -170,6 +176,12 @@ impl Organization {
}
}
// Used to either subtract from or add to the current status
// The number 128 should be fine; it is well within the range of an i32.
// The same goes for the database, where we only use INTEGER (the same as an i32).
// It also provides enough room for 100+ status types, which I doubt will ever happen.
static ACTIVATE_REVOKE_DIFF: i32 = 128;
impl UserOrganization {
pub fn new(user_uuid: String, org_uuid: String) -> Self {
Self {
@@ -184,6 +196,18 @@ impl UserOrganization {
atype: UserOrgType::User as i32,
}
}
pub fn restore(&mut self) {
if self.status < UserOrgStatus::Accepted as i32 {
self.status += ACTIVATE_REVOKE_DIFF;
}
}
pub fn revoke(&mut self) {
if self.status > UserOrgStatus::Revoked as i32 {
self.status -= ACTIVATE_REVOKE_DIFF;
}
}
}
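To make the offset concrete, a minimal sketch assuming `user_uuid` and `org_uuid` string variables:
// A Confirmed member has status 2.
let mut member = UserOrganization::new(user_uuid.to_string(), org_uuid.to_string());
member.status = UserOrgStatus::Confirmed as i32; // 2
member.revoke();  // 2 - 128 = -126, which is below Revoked (-1), so the API reports -1
member.restore(); // -126 + 128 = 2, the original Confirmed state is recovered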
use crate::db::DbConn;
@@ -193,7 +217,11 @@ use crate::error::MapResult;
/// Database methods
impl Organization {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
if !email_address::EmailAddress::is_valid(self.billing_email.trim()) {
err!(format!("BillingEmail {} is not a valid email address", self.billing_email.trim()))
}
for user_org in UserOrganization::find_by_org(&self.uuid, conn).await.iter() {
User::update_uuid_revision(&user_org.user_uuid, conn).await;
}
@@ -230,7 +258,7 @@ impl Organization {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
use super::{Cipher, Collection};
Cipher::delete_all_by_organization(&self.uuid, conn).await?;
@@ -245,7 +273,7 @@ impl Organization {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
organizations::table
.filter(organizations::uuid.eq(uuid))
@@ -254,7 +282,7 @@ impl Organization {
}}
}
pub async fn get_all(conn: &DbConn) -> Vec<Self> {
pub async fn get_all(conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
organizations::table.load::<OrganizationDb>(conn).expect("Error loading organizations").from_db()
}}
@@ -262,57 +290,61 @@ impl Organization {
}
impl UserOrganization {
pub async fn to_json(&self, conn: &DbConn) -> Value {
pub async fn to_json(&self, conn: &mut DbConn) -> Value {
let org = Organization::find_by_uuid(&self.org_uuid, conn).await.unwrap();
// https://github.com/bitwarden/server/blob/13d1e74d6960cf0d042620b72d85bf583a4236f7/src/Api/Models/Response/ProfileOrganizationResponseModel.cs
json!({
"Id": self.org_uuid,
"Identifier": null, // not supported by us
"Identifier": null, // Not supported
"Name": org.name,
"Seats": 10, // The value doesn't matter, we don't check server-side
"MaxCollections": 10, // The value doesn't matter, we don't check server-side
"UsersGetPremium": true,
"Use2fa": true,
"UseDirectory": false, // Is supported, but this value isn't checked anywhere (yet)
"UseEvents": false, // not supported by us
"UseGroups": false, // not supported by us
"UseEvents": CONFIG.org_events_enabled(),
"UseGroups": CONFIG.org_groups_enabled(),
"UseTotp": true,
// "UseScim": false, // Not supported (Not AGPLv3 Licensed)
"UsePolicies": true,
"UseApi": false, // not supported by us
"UseApi": false, // Not supported
"SelfHost": true,
"HasPublicAndPrivateKeys": org.private_key.is_some() && org.public_key.is_some(),
"ResetPasswordEnrolled": false, // not supported by us
"SsoBound": false, // We do not support SSO
"UseSso": false, // We do not support SSO
// TODO: Add support for Business Portal
// Upstream is moving Policies and SSO management outside of the web-vault to /portal
// For now they still have that code also in the web-vault, but they will remove it at some point.
// https://github.com/bitwarden/server/tree/master/bitwarden_license/src/
"UseBusinessPortal": false, // Disable BusinessPortal Button
"ResetPasswordEnrolled": false, // Not supported
"SsoBound": false, // Not supported
"UseSso": false, // Not supported
"ProviderId": null,
"ProviderName": null,
// "KeyConnectorEnabled": false,
// "KeyConnectorUrl": null,
// TODO: Add support for Custom User Roles
// See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
// "Permissions": {
// "AccessBusinessPortal": false,
// "AccessEventLogs": false,
// "AccessEventLogs": false, // Not supported
// "AccessImportExport": false,
// "AccessReports": false,
// "ManageAllCollections": false,
// "CreateNewCollections": false,
// "EditAnyCollection": false,
// "DeleteAnyCollection": false,
// "ManageAssignedCollections": false,
// "editAssignedCollections": false,
// "deleteAssignedCollections": false,
// "ManageCiphers": false,
// "ManageGroups": false,
// "ManageGroups": false, // Not supported
// "ManagePolicies": false,
// "ManageResetPassword": false,
// "ManageSso": false,
// "ManageResetPassword": false, // Not supported
// "ManageSso": false, // Not supported
// "ManageUsers": false,
// "ManageScim": false, // Not supported (Not AGPLv3 Licensed)
// },
"MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
// These are per user
"UserId": self.user_uuid,
"Key": self.akey,
"Status": self.status,
"Type": self.atype,
@@ -322,16 +354,24 @@ impl UserOrganization {
})
}
pub async fn to_json_user_details(&self, conn: &DbConn) -> Value {
pub async fn to_json_user_details(&self, conn: &mut DbConn) -> Value {
let user = User::find_by_uuid(&self.user_uuid, conn).await.unwrap();
// Because Bitwarden wants the status to be -1 for revoked users, we need to catch that here.
// We subtract/add a number so we can restore/activate the user to its previous state again.
let status = if self.status < UserOrgStatus::Revoked as i32 {
UserOrgStatus::Revoked as i32
} else {
self.status
};
json!({
"Id": self.uuid,
"UserId": self.user_uuid,
"Name": user.name,
"Email": user.email,
"Status": self.status,
"Status": status,
"Type": self.atype,
"AccessAll": self.access_all,
@@ -347,7 +387,7 @@ impl UserOrganization {
})
}
pub async fn to_json_details(&self, conn: &DbConn) -> Value {
pub async fn to_json_details(&self, conn: &mut DbConn) -> Value {
let coll_uuids = if self.access_all {
vec![] // If we have complete access, no need to fill the array
} else {
@@ -365,11 +405,19 @@ impl UserOrganization {
.collect()
};
// Because Bitwarden wants the status to be -1 for revoked users, we need to catch that here.
// We subtract/add a number so we can restore/activate the user to its previous state again.
let status = if self.status < UserOrgStatus::Revoked as i32 {
UserOrgStatus::Revoked as i32
} else {
self.status
};
json!({
"Id": self.uuid,
"UserId": self.user_uuid,
"Status": self.status,
"Status": status,
"Type": self.atype,
"AccessAll": self.access_all,
"Collections": coll_uuids,
@@ -377,7 +425,7 @@ impl UserOrganization {
"Object": "organizationUserDetails",
})
}
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
db_run! { conn:
@@ -411,10 +459,11 @@ impl UserOrganization {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
CollectionUser::delete_all_by_user_and_org(&self.user_uuid, &self.org_uuid, conn).await?;
GroupUser::delete_all_by_user(&self.uuid, conn).await?;
db_run! { conn: {
diesel::delete(users_organizations::table.filter(users_organizations::uuid.eq(self.uuid)))
@@ -423,21 +472,21 @@ impl UserOrganization {
}}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for user_org in Self::find_by_org(org_uuid, conn).await {
user_org.delete(conn).await?;
}
Ok(())
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for user_org in Self::find_any_state_by_user(user_uuid, conn).await {
user_org.delete(conn).await?;
}
Ok(())
}
pub async fn find_by_email_and_org(email: &str, org_id: &str, conn: &DbConn) -> Option<UserOrganization> {
pub async fn find_by_email_and_org(email: &str, org_id: &str, conn: &mut DbConn) -> Option<UserOrganization> {
if let Some(user) = super::User::find_by_mail(email, conn).await {
if let Some(user_org) = UserOrganization::find_by_user_and_org(&user.uuid, org_id, conn).await {
return Some(user_org);
@@ -459,7 +508,7 @@ impl UserOrganization {
(self.access_all || self.atype >= UserOrgType::Admin) && self.has_status(UserOrgStatus::Confirmed)
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::uuid.eq(uuid))
@@ -468,7 +517,7 @@ impl UserOrganization {
}}
}
pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::uuid.eq(uuid))
@@ -478,7 +527,7 @@ impl UserOrganization {
}}
}
pub async fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_confirmed_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -488,7 +537,7 @@ impl UserOrganization {
}}
}
pub async fn find_invited_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_invited_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -498,7 +547,7 @@ impl UserOrganization {
}}
}
pub async fn find_any_state_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_any_state_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -507,7 +556,19 @@ impl UserOrganization {
}}
}
pub async fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn count_accepted_and_confirmed_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Accepted as i32))
.or_filter(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.count()
.first::<i64>(conn)
.unwrap_or(0)
}}
}
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
@@ -516,7 +577,7 @@ impl UserOrganization {
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
@@ -527,17 +588,29 @@ impl UserOrganization {
}}
}
pub async fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org_and_type(org_uuid: &str, atype: UserOrgType, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.filter(users_organizations::atype.eq(atype))
.filter(users_organizations::atype.eq(atype as i32))
.load::<UserOrganizationDb>(conn)
.expect("Error loading user organizations").from_db()
}}
}
pub async fn find_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn count_confirmed_by_org_and_type(org_uuid: &str, atype: UserOrgType, conn: &mut DbConn) -> i64 {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.filter(users_organizations::atype.eq(atype as i32))
.filter(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.count()
.first::<i64>(conn)
.unwrap_or(0)
}}
}
pub async fn find_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -547,7 +620,7 @@ impl UserOrganization {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -556,7 +629,17 @@ impl UserOrganization {
}}
}
pub async fn find_by_user_and_policy(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> Vec<Self> {
pub async fn get_org_uuid_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<String> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.select(users_organizations::org_uuid)
.load::<String>(conn)
.unwrap_or_default()
}}
}
pub async fn find_by_user_and_policy(user_uuid: &str, policy_type: OrgPolicyType, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.inner_join(
@@ -575,7 +658,7 @@ impl UserOrganization {
}}
}
pub async fn find_by_cipher_and_org(cipher_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_cipher_and_org(cipher_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
@@ -597,7 +680,19 @@ impl UserOrganization {
}}
}
pub async fn find_by_collection_and_org(collection_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn user_has_ge_admin_access_to_cipher(user_uuid: &str, cipher_uuid: &str, conn: &mut DbConn) -> bool {
db_run! { conn: {
users_organizations::table
.inner_join(ciphers::table.on(ciphers::uuid.eq(cipher_uuid).and(ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable()))))
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::atype.eq_any(vec![UserOrgType::Owner as i32, UserOrgType::Admin as i32]))
.count()
.first::<i64>(conn)
.ok().unwrap_or(0) != 0
}}
}
pub async fn find_by_collection_and_org(collection_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))

View File

@@ -5,9 +5,9 @@ use super::User;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "sends"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)]
#[diesel(table_name = sends)]
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct Send {
pub uuid: String,
@@ -81,7 +81,7 @@ impl Send {
if let Some(password) = password {
self.password_iter = Some(PASSWORD_ITER);
let salt = crate::crypto::get_random_64();
let salt = crate::crypto::get_random_bytes::<64>().to_vec();
let hash = crate::crypto::hash_password(password.as_bytes(), &salt, PASSWORD_ITER as u32);
self.password_salt = Some(salt);
self.password_hash = Some(hash);
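The crypto helper swap above is behaviour-preserving; a short sketch of what the new call yields, assuming `crate::crypto` as used in the diff.
// get_random_bytes::<N>() returns a fixed-size array; .to_vec() keeps the old Vec<u8> shape.
let salt: Vec<u8> = crate::crypto::get_random_bytes::<64>().to_vec();
assert_eq!(salt.len(), 64);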
@@ -101,7 +101,7 @@ impl Send {
}
}
pub async fn creator_identifier(&self, conn: &DbConn) -> Option<String> {
pub async fn creator_identifier(&self, conn: &mut DbConn) -> Option<String> {
if let Some(hide_email) = self.hide_email {
if hide_email {
return None;
@@ -148,7 +148,7 @@ impl Send {
})
}
pub async fn to_json_access(&self, conn: &DbConn) -> Value {
pub async fn to_json_access(&self, conn: &mut DbConn) -> Value {
use crate::util::format_date;
let data: Value = serde_json::from_str(&self.data).unwrap_or_default();
@@ -174,7 +174,7 @@ use crate::api::EmptyResult;
use crate::error::MapResult;
impl Send {
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
self.revision_date = Utc::now().naive_utc();
@@ -209,7 +209,7 @@ impl Send {
}
}
pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
if self.atype == SendType::File as i32 {
@@ -224,13 +224,13 @@ impl Send {
}
/// Purge all sends that are past their deletion date.
pub async fn purge(conn: &DbConn) {
pub async fn purge(conn: &mut DbConn) {
for send in Self::find_by_past_deletion_date(conn).await {
send.delete(conn).await.ok();
}
}
pub async fn update_users_revision(&self, conn: &DbConn) -> Vec<String> {
pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<String> {
let mut user_uuids = Vec::new();
match &self.user_uuid {
Some(user_uuid) => {
@@ -244,14 +244,14 @@ impl Send {
user_uuids
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
for send in Self::find_by_user(user_uuid, conn).await {
send.delete(conn).await?;
}
Ok(())
}
pub async fn find_by_access_id(access_id: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_access_id(access_id: &str, conn: &mut DbConn) -> Option<Self> {
use data_encoding::BASE64URL_NOPAD;
use uuid::Uuid;
@@ -268,7 +268,7 @@ impl Send {
Self::find_by_uuid(&uuid, conn).await
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
sends::table
.filter(sends::uuid.eq(uuid))
@@ -278,7 +278,7 @@ impl Send {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table
.filter(sends::user_uuid.eq(user_uuid))
@@ -286,7 +286,7 @@ impl Send {
}}
}
pub async fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table
.filter(sends::organization_uuid.eq(org_uuid))
@@ -294,7 +294,7 @@ impl Send {
}}
}
pub async fn find_by_past_deletion_date(conn: &DbConn) -> Vec<Self> {
pub async fn find_by_past_deletion_date(conn: &mut DbConn) -> Vec<Self> {
let now = Utc::now().naive_utc();
db_run! {conn: {
sends::table

View File

@@ -4,8 +4,8 @@ use crate::{api::EmptyResult, db::DbConn, error::MapResult};
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "twofactor"]
#[primary_key(uuid)]
#[diesel(table_name = twofactor)]
#[diesel(primary_key(uuid))]
pub struct TwoFactor {
pub uuid: String,
pub user_uuid: String,
@@ -68,7 +68,7 @@ impl TwoFactor {
/// Database methods
impl TwoFactor {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(twofactor::table)
@@ -107,7 +107,7 @@ impl TwoFactor {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::uuid.eq(self.uuid)))
.execute(conn)
@@ -115,7 +115,7 @@ impl TwoFactor {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
@@ -126,7 +126,7 @@ impl TwoFactor {
}}
}
pub async fn find_by_user_and_type(user_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
pub async fn find_by_user_and_type(user_uuid: &str, atype: i32, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
@@ -137,7 +137,7 @@ impl TwoFactor {
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(conn)
@@ -145,7 +145,7 @@ impl TwoFactor {
}}
}
pub async fn migrate_u2f_to_webauthn(conn: &DbConn) -> EmptyResult {
pub async fn migrate_u2f_to_webauthn(conn: &mut DbConn) -> EmptyResult {
let u2f_factors = db_run! { conn: {
twofactor::table
.filter(twofactor::atype.eq(TwoFactorType::U2f as i32))

View File

@@ -4,8 +4,8 @@ use crate::{api::EmptyResult, auth::ClientIp, db::DbConn, error::MapResult, CONF
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "twofactor_incomplete"]
#[primary_key(user_uuid, device_uuid)]
#[diesel(table_name = twofactor_incomplete)]
#[diesel(primary_key(user_uuid, device_uuid))]
pub struct TwoFactorIncomplete {
pub user_uuid: String,
// This device UUID is simply what's claimed by the device. It doesn't
@@ -24,7 +24,7 @@ impl TwoFactorIncomplete {
device_uuid: &str,
device_name: &str,
ip: &ClientIp,
conn: &DbConn,
conn: &mut DbConn,
) -> EmptyResult {
if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
return Ok(());
@@ -52,7 +52,7 @@ impl TwoFactorIncomplete {
}}
}
pub async fn mark_complete(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn mark_complete(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> EmptyResult {
if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
return Ok(());
}
@@ -60,7 +60,7 @@ impl TwoFactorIncomplete {
Self::delete_by_user_and_device(user_uuid, device_uuid, conn).await
}
pub async fn find_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
twofactor_incomplete::table
.filter(twofactor_incomplete::user_uuid.eq(user_uuid))
@@ -71,7 +71,7 @@ impl TwoFactorIncomplete {
}}
}
pub async fn find_logins_before(dt: &NaiveDateTime, conn: &DbConn) -> Vec<Self> {
pub async fn find_logins_before(dt: &NaiveDateTime, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
twofactor_incomplete::table
.filter(twofactor_incomplete::login_time.lt(dt))
@@ -81,11 +81,11 @@ impl TwoFactorIncomplete {
}}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
Self::delete_by_user_and_device(&self.user_uuid, &self.device_uuid, conn).await
}
pub async fn delete_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor_incomplete::table
.filter(twofactor_incomplete::user_uuid.eq(user_uuid))
@@ -95,7 +95,7 @@ impl TwoFactorIncomplete {
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor_incomplete::table.filter(twofactor_incomplete::user_uuid.eq(user_uuid)))
.execute(conn)

View File

@@ -6,9 +6,9 @@ use crate::CONFIG;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)]
#[diesel(table_name = users)]
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct User {
pub uuid: String,
pub enabled: bool,
@@ -32,7 +32,7 @@ db_object! {
pub private_key: Option<String>,
pub public_key: Option<String>,
#[column_name = "totp_secret"] // Note, this is only added to the UserDb structs, not to User
#[diesel(column_name = "totp_secret")] // Note, this is only added to the UserDb structs, not to User
_totp_secret: Option<String>,
pub totp_recover: Option<String>,
@@ -49,8 +49,8 @@ db_object! {
}
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "invitations"]
#[primary_key(email)]
#[diesel(table_name = invitations)]
#[diesel(primary_key(email))]
pub struct Invitation {
pub email: String,
}
@@ -93,7 +93,7 @@ impl User {
email_new_token: None,
password_hash: Vec::new(),
salt: crypto::get_random_64(),
salt: crypto::get_random_bytes::<64>().to_vec(),
password_iterations: CONFIG.password_iterations(),
security_stamp: crate::util::get_uuid(),
@@ -192,18 +192,13 @@ use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
use futures::{stream, stream::StreamExt};
/// Database methods
impl User {
pub async fn to_json(&self, conn: &DbConn) -> Value {
let orgs_json = stream::iter(UserOrganization::find_confirmed_by_user(&self.uuid, conn).await)
.then(|c| async {
let c = c; // Move out this single variable
c.to_json(conn).await
})
.collect::<Vec<Value>>()
.await;
pub async fn to_json(&self, conn: &mut DbConn) -> Value {
let mut orgs_json = Vec::new();
for c in UserOrganization::find_confirmed_by_user(&self.uuid, conn).await {
orgs_json.push(c.to_json(conn).await);
}
let twofactor_enabled = !TwoFactor::find_by_user(&self.uuid, conn).await.is_empty();
@@ -235,7 +230,7 @@ impl User {
})
}
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("User email can't be empty")
}
@@ -273,13 +268,13 @@ impl User {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
for user_org in UserOrganization::find_confirmed_by_user(&self.uuid, conn).await {
if user_org.atype == UserOrgType::Owner {
let owner_type = UserOrgType::Owner as i32;
if UserOrganization::find_by_org_and_type(&user_org.org_uuid, owner_type, conn).await.len() <= 1 {
err!("Can't delete last owner")
}
if user_org.atype == UserOrgType::Owner
&& UserOrganization::count_confirmed_by_org_and_type(&user_org.org_uuid, UserOrgType::Owner, conn).await
<= 1
{
err!("Can't delete last owner")
}
}
@@ -301,13 +296,13 @@ impl User {
}}
}
pub async fn update_uuid_revision(uuid: &str, conn: &DbConn) {
pub async fn update_uuid_revision(uuid: &str, conn: &mut DbConn) {
if let Err(e) = Self::_update_revision(uuid, &Utc::now().naive_utc(), conn).await {
warn!("Failed to update revision for {}: {:#?}", uuid, e);
}
}
pub async fn update_all_revisions(conn: &DbConn) -> EmptyResult {
pub async fn update_all_revisions(conn: &mut DbConn) -> EmptyResult {
let updated_at = Utc::now().naive_utc();
db_run! {conn: {
@@ -320,13 +315,13 @@ impl User {
}}
}
pub async fn update_revision(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn update_revision(&mut self, conn: &mut DbConn) -> EmptyResult {
self.updated_at = Utc::now().naive_utc();
Self::_update_revision(&self.uuid, &self.updated_at, conn).await
}
async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &DbConn) -> EmptyResult {
async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
db_run! {conn: {
crate::util::retry(|| {
diesel::update(users::table.filter(users::uuid.eq(uuid)))
@@ -337,7 +332,7 @@ impl User {
}}
}
pub async fn find_by_mail(mail: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_mail(mail: &str, conn: &mut DbConn) -> Option<Self> {
let lower_mail = mail.to_lowercase();
db_run! {conn: {
users::table
@@ -348,19 +343,19 @@ impl User {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
users::table.filter(users::uuid.eq(uuid)).first::<UserDb>(conn).ok().from_db()
}}
}
pub async fn get_all(conn: &DbConn) -> Vec<Self> {
pub async fn get_all(conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
users::table.load::<UserDb>(conn).expect("Error loading users").from_db()
}}
}
pub async fn last_active(&self, conn: &DbConn) -> Option<NaiveDateTime> {
pub async fn last_active(&self, conn: &mut DbConn) -> Option<NaiveDateTime> {
match Device::find_latest_active_by_user(&self.uuid, conn).await {
Some(device) => Some(device.updated_at),
None => None,
@@ -369,14 +364,14 @@ impl User {
}
impl Invitation {
pub fn new(email: String) -> Self {
pub fn new(email: &str) -> Self {
let email = email.to_lowercase();
Self {
email,
}
}
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("Invitation email can't be empty")
}
@@ -401,7 +396,7 @@ impl Invitation {
}
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
db_run! {conn: {
diesel::delete(invitations::table.filter(invitations::email.eq(self.email)))
.execute(conn)
@@ -409,7 +404,7 @@ impl Invitation {
}}
}
pub async fn find_by_mail(mail: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_mail(mail: &str, conn: &mut DbConn) -> Option<Self> {
let lower_mail = mail.to_lowercase();
db_run! {conn: {
invitations::table
@@ -420,7 +415,7 @@ impl Invitation {
}}
}
pub async fn take(mail: &str, conn: &DbConn) -> bool {
pub async fn take(mail: &str, conn: &mut DbConn) -> bool {
match Self::find_by_mail(mail, conn).await {
Some(invitation) => invitation.delete(conn).await.is_ok(),
None => false,
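Since `Invitation::new` now takes a `&str` (and lowercases it internally), call sites no longer need to build an owned `String` first. A usage sketch, with the surrounding variables assumed:

```rust
// Create and persist an invitation from a borrowed email address.
let invitation = Invitation::new(&user_email);
invitation.save(&mut conn).await?;

// Later, consume it; `take` lowercases the address via `find_by_mail` and deletes it if present.
let was_invited = Invitation::take(&user_email, &mut conn).await;
```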

View File

@@ -55,6 +55,27 @@ table! {
}
}
table! {
event (uuid) {
uuid -> Varchar,
event_type -> Integer,
user_uuid -> Nullable<Varchar>,
org_uuid -> Nullable<Varchar>,
cipher_uuid -> Nullable<Varchar>,
collection_uuid -> Nullable<Varchar>,
group_uuid -> Nullable<Varchar>,
org_user_uuid -> Nullable<Varchar>,
act_user_uuid -> Nullable<Varchar>,
device_type -> Nullable<Integer>,
ip_address -> Nullable<Text>,
event_date -> Timestamp,
policy_uuid -> Nullable<Varchar>,
provider_uuid -> Nullable<Varchar>,
provider_user_uuid -> Nullable<Varchar>,
provider_org_uuid -> Nullable<Varchar>,
}
}
table! {
favorites (user_uuid, cipher_uuid) {
user_uuid -> Text,
@@ -220,6 +241,34 @@ table! {
}
}
table! {
groups (uuid) {
uuid -> Text,
organizations_uuid -> Text,
name -> Text,
access_all -> Bool,
external_id -> Nullable<Text>,
creation_date -> Timestamp,
revision_date -> Timestamp,
}
}
table! {
groups_users (groups_uuid, users_organizations_uuid) {
groups_uuid -> Text,
users_organizations_uuid -> Text,
}
}
table! {
collections_groups (collections_uuid, groups_uuid) {
collections_uuid -> Text,
groups_uuid -> Text,
read_only -> Bool,
hide_passwords -> Bool,
}
}
joinable!(attachments -> ciphers (cipher_uuid));
joinable!(ciphers -> organizations (organization_uuid));
joinable!(ciphers -> users (user_uuid));
@@ -239,6 +288,12 @@ joinable!(users_collections -> users (user_uuid));
joinable!(users_organizations -> organizations (org_uuid));
joinable!(users_organizations -> users (user_uuid));
joinable!(emergency_access -> users (grantor_uuid));
joinable!(groups -> organizations (organizations_uuid));
joinable!(groups_users -> users_organizations (users_organizations_uuid));
joinable!(groups_users -> groups (groups_uuid));
joinable!(collections_groups -> collections (collections_uuid));
joinable!(collections_groups -> groups (groups_uuid));
joinable!(event -> users_organizations (uuid));
allow_tables_to_appear_in_same_query!(
attachments,
@@ -257,4 +312,8 @@ allow_tables_to_appear_in_same_query!(
users_collections,
users_organizations,
emergency_access,
groups,
groups_users,
collections_groups,
event,
);
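With the new `event`, `groups`, `groups_users`, and `collections_groups` tables declared, they can be queried like the existing ones. As an illustration only, a finder against the new `event` table could follow the same `db_run!`/`from_db` pattern visible elsewhere in this diff; the `Event` model and its `EventDb` wrapper are assumed here and are not shown in this hunk:

```rust
// Hypothetical finder inside an `impl Event` block, mirroring the other models in this diff.
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
    db_run! { conn: {
        event::table
            .filter(event::org_uuid.eq(org_uuid))
            .order_by(event::event_date.desc())
            .load::<EventDb>(conn)
            .expect("Error loading events")
            .from_db()
    }}
}
```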

View File

@@ -55,6 +55,27 @@ table! {
}
}
table! {
event (uuid) {
uuid -> Text,
event_type -> Integer,
user_uuid -> Nullable<Text>,
org_uuid -> Nullable<Text>,
cipher_uuid -> Nullable<Text>,
collection_uuid -> Nullable<Text>,
group_uuid -> Nullable<Text>,
org_user_uuid -> Nullable<Text>,
act_user_uuid -> Nullable<Text>,
device_type -> Nullable<Integer>,
ip_address -> Nullable<Text>,
event_date -> Timestamp,
policy_uuid -> Nullable<Text>,
provider_uuid -> Nullable<Text>,
provider_user_uuid -> Nullable<Text>,
provider_org_uuid -> Nullable<Text>,
}
}
table! {
favorites (user_uuid, cipher_uuid) {
user_uuid -> Text,
@@ -220,6 +241,34 @@ table! {
}
}
table! {
groups (uuid) {
uuid -> Text,
organizations_uuid -> Text,
name -> Text,
access_all -> Bool,
external_id -> Nullable<Text>,
creation_date -> Timestamp,
revision_date -> Timestamp,
}
}
table! {
groups_users (groups_uuid, users_organizations_uuid) {
groups_uuid -> Text,
users_organizations_uuid -> Text,
}
}
table! {
collections_groups (collections_uuid, groups_uuid) {
collections_uuid -> Text,
groups_uuid -> Text,
read_only -> Bool,
hide_passwords -> Bool,
}
}
joinable!(attachments -> ciphers (cipher_uuid));
joinable!(ciphers -> organizations (organization_uuid));
joinable!(ciphers -> users (user_uuid));
@@ -239,6 +288,12 @@ joinable!(users_collections -> users (user_uuid));
joinable!(users_organizations -> organizations (org_uuid));
joinable!(users_organizations -> users (user_uuid));
joinable!(emergency_access -> users (grantor_uuid));
joinable!(groups -> organizations (organizations_uuid));
joinable!(groups_users -> users_organizations (users_organizations_uuid));
joinable!(groups_users -> groups (groups_uuid));
joinable!(collections_groups -> collections (collections_uuid));
joinable!(collections_groups -> groups (groups_uuid));
joinable!(event -> users_organizations (uuid));
allow_tables_to_appear_in_same_query!(
attachments,
@@ -257,4 +312,8 @@ allow_tables_to_appear_in_same_query!(
users_collections,
users_organizations,
emergency_access,
groups,
groups_users,
collections_groups,
event,
);

View File

@@ -55,6 +55,27 @@ table! {
}
}
table! {
event (uuid) {
uuid -> Text,
event_type -> Integer,
user_uuid -> Nullable<Text>,
org_uuid -> Nullable<Text>,
cipher_uuid -> Nullable<Text>,
collection_uuid -> Nullable<Text>,
group_uuid -> Nullable<Text>,
org_user_uuid -> Nullable<Text>,
act_user_uuid -> Nullable<Text>,
device_type -> Nullable<Integer>,
ip_address -> Nullable<Text>,
event_date -> Timestamp,
policy_uuid -> Nullable<Text>,
provider_uuid -> Nullable<Text>,
provider_user_uuid -> Nullable<Text>,
provider_org_uuid -> Nullable<Text>,
}
}
table! {
favorites (user_uuid, cipher_uuid) {
user_uuid -> Text,
@@ -220,6 +241,34 @@ table! {
}
}
table! {
groups (uuid) {
uuid -> Text,
organizations_uuid -> Text,
name -> Text,
access_all -> Bool,
external_id -> Nullable<Text>,
creation_date -> Timestamp,
revision_date -> Timestamp,
}
}
table! {
groups_users (groups_uuid, users_organizations_uuid) {
groups_uuid -> Text,
users_organizations_uuid -> Text,
}
}
table! {
collections_groups (collections_uuid, groups_uuid) {
collections_uuid -> Text,
groups_uuid -> Text,
read_only -> Bool,
hide_passwords -> Bool,
}
}
joinable!(attachments -> ciphers (cipher_uuid));
joinable!(ciphers -> organizations (organization_uuid));
joinable!(ciphers -> users (user_uuid));
@@ -238,7 +287,14 @@ joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid));
joinable!(users_organizations -> organizations (org_uuid));
joinable!(users_organizations -> users (user_uuid));
joinable!(users_organizations -> ciphers (org_uuid));
joinable!(emergency_access -> users (grantor_uuid));
joinable!(groups -> organizations (organizations_uuid));
joinable!(groups_users -> users_organizations (users_organizations_uuid));
joinable!(groups_users -> groups (groups_uuid));
joinable!(collections_groups -> collections (collections_uuid));
joinable!(collections_groups -> groups (groups_uuid));
joinable!(event -> users_organizations (uuid));
allow_tables_to_appear_in_same_query!(
attachments,
@@ -257,4 +313,8 @@ allow_tables_to_appear_in_same_query!(
users_collections,
users_organizations,
emergency_access,
groups,
groups_users,
collections_groups,
event,
);

View File

@@ -1,6 +1,7 @@
//
// Error generator macro
//
use crate::db::models::EventType;
use std::error::Error as StdError;
macro_rules! make_error {
@@ -8,14 +9,17 @@ macro_rules! make_error {
const BAD_REQUEST: u16 = 400;
pub enum ErrorKind { $($name( $ty )),+ }
pub struct Error { message: String, error: ErrorKind, error_code: u16 }
#[derive(Debug)]
pub struct ErrorEvent { pub event: EventType }
pub struct Error { message: String, error: ErrorKind, error_code: u16, event: Option<ErrorEvent> }
$(impl From<$ty> for Error {
fn from(err: $ty) -> Self { Error::from((stringify!($name), err)) }
})+
$(impl<S: Into<String>> From<(S, $ty)> for Error {
fn from(val: (S, $ty)) -> Self {
Error { message: val.0.into(), error: ErrorKind::$name(val.1), error_code: BAD_REQUEST }
Error { message: val.0.into(), error: ErrorKind::$name(val.1), error_code: BAD_REQUEST, event: None }
}
})+
impl StdError for Error {
@@ -36,7 +40,6 @@ macro_rules! make_error {
use diesel::r2d2::PoolError as R2d2Err;
use diesel::result::Error as DieselErr;
use diesel::ConnectionError as DieselConErr;
use diesel_migrations::RunMigrationsError as DieselMigErr;
use handlebars::RenderError as HbErr;
use jsonwebtoken::errors::Error as JwtErr;
use lettre::address::AddressError as AddrErr;
@@ -87,7 +90,6 @@ make_error! {
Rocket(RocketErr): _has_source, _api_error,
DieselCon(DieselConErr): _has_source, _api_error,
DieselMig(DieselMigErr): _has_source, _api_error,
Webauthn(WebauthnErr): _has_source, _api_error,
WebSocket(TungstError): _has_source, _api_error,
}
@@ -132,6 +134,16 @@ impl Error {
self.error_code = code;
self
}
#[must_use]
pub fn with_event(mut self, event: ErrorEvent) -> Self {
self.event = Some(event);
self
}
pub fn get_event(&self) -> &Option<ErrorEvent> {
&self.event
}
}
pub trait MapResult<S> {
@@ -156,7 +168,6 @@ impl<S> MapResult<S> for Option<S> {
}
}
#[allow(clippy::unnecessary_wraps)]
const fn _has_source<T>(e: T) -> Option<T> {
Some(e)
}
@@ -218,12 +229,21 @@ macro_rules! err {
error!("{}", $msg);
return Err($crate::error::Error::new($msg, $msg));
}};
($msg:expr, ErrorEvent $err_event:tt) => {{
error!("{}", $msg);
return Err($crate::error::Error::new($msg, $msg).with_event($crate::error::ErrorEvent $err_event));
}};
($usr_msg:expr, $log_value:expr) => {{
error!("{}. {}", $usr_msg, $log_value);
return Err($crate::error::Error::new($usr_msg, $log_value));
}};
($usr_msg:expr, $log_value:expr, ErrorEvent $err_event:tt) => {{
error!("{}. {}", $usr_msg, $log_value);
return Err($crate::error::Error::new($usr_msg, $log_value).with_event($crate::error::ErrorEvent $err_event));
}};
}
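The new `ErrorEvent` arms let an audit-log event type be attached to an error where it is raised. Like the plain arms, they expand to an early `return Err(...)`, so they are only valid inside a function returning the crate's result types. A usage sketch; the concrete `EventType` variant name is assumed and not part of this hunk:

```rust
// Raise a 400 error and tag it with an event type so the caller can write it to the event log.
err!("Incorrect password", ErrorEvent { event: EventType::UserFailedLogIn })
```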
#[macro_export]
macro_rules! err_silent {
($msg:expr) => {{
return Err($crate::error::Error::new($msg, $msg));
@@ -235,11 +255,11 @@ macro_rules! err_silent {
#[macro_export]
macro_rules! err_code {
($msg:expr, $err_code: expr) => {{
($msg:expr, $err_code:expr) => {{
error!("{}", $msg);
return Err($crate::error::Error::new($msg, $msg).with_code($err_code));
}};
($usr_msg:expr, $log_value:expr, $err_code: expr) => {{
($usr_msg:expr, $log_value:expr, $err_code:expr) => {{
error!("{}. {}", $usr_msg, $log_value);
return Err($crate::error::Error::new($usr_msg, $log_value).with_code($err_code));
}};
@@ -262,6 +282,9 @@ macro_rules! err_json {
($expr:expr, $log_value:expr) => {{
return Err(($log_value, $expr).into());
}};
($expr:expr, $log_value:expr, $err_event:expr, ErrorEvent) => {{
return Err(($log_value, $expr).into().with_event($err_event));
}};
}
#[macro_export]

View File

@@ -4,7 +4,7 @@ use chrono::NaiveDateTime;
use percent_encoding::{percent_encode, NON_ALPHANUMERIC};
use lettre::{
message::{Mailbox, Message, MultiPart},
message::{Attachment, Body, Mailbox, Message, MultiPart, SinglePart},
transport::smtp::authentication::{Credentials, Mechanism as SmtpAuthMechanism},
transport::smtp::client::{Tls, TlsParameters},
transport::smtp::extension::ClientId,
@@ -117,7 +117,14 @@ pub async fn send_password_hint(address: &str, hint: Option<String>) -> EmptyRes
"email/pw_hint_none"
};
let (subject, body_html, body_text) = get_text(template_name, json!({ "hint": hint, "url": CONFIG.domain() }))?;
let (subject, body_html, body_text) = get_text(
template_name,
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"hint": hint,
}),
)?;
send_email(address, &subject, body_html, body_text).await
}
@@ -130,6 +137,7 @@ pub async fn send_delete_account(address: &str, uuid: &str) -> EmptyResult {
"email/delete_account",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"user_id": uuid,
"email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
"token": delete_token,
@@ -147,6 +155,7 @@ pub async fn send_verify_email(address: &str, uuid: &str) -> EmptyResult {
"email/verify_email",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"user_id": uuid,
"email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
"token": verify_email_token,
@@ -161,6 +170,7 @@ pub async fn send_welcome(address: &str) -> EmptyResult {
"email/welcome",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
}),
)?;
@@ -175,6 +185,7 @@ pub async fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult
"email/welcome_must_verify",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"user_id": uuid,
"token": verify_email_token,
}),
@@ -188,6 +199,7 @@ pub async fn send_2fa_removed_from_org(address: &str, org_name: &str) -> EmptyRe
"email/send_2fa_removed_from_org",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"org_name": org_name,
}),
)?;
@@ -200,6 +212,7 @@ pub async fn send_single_org_removed_from_org(address: &str, org_name: &str) ->
"email/send_single_org_removed_from_org",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"org_name": org_name,
}),
)?;
@@ -228,6 +241,7 @@ pub async fn send_invite(
"email/send_org_invite",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"org_id": org_id.as_deref().unwrap_or("_"),
"org_user_id": org_user_id.as_deref().unwrap_or("_"),
"email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
@@ -242,16 +256,16 @@ pub async fn send_invite(
pub async fn send_emergency_access_invite(
address: &str,
uuid: &str,
emer_id: Option<String>,
grantor_name: Option<String>,
grantor_email: Option<String>,
emer_id: &str,
grantor_name: &str,
grantor_email: &str,
) -> EmptyResult {
let claims = generate_emergency_access_invite_claims(
uuid.to_string(),
String::from(uuid),
String::from(address),
emer_id.clone(),
grantor_name.clone(),
grantor_email,
String::from(emer_id),
String::from(grantor_name),
String::from(grantor_email),
);
let invite_token = encode_jwt(&claims);
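With the switch from `Option<String>` parameters to plain `&str`, call sites pass the values directly and the `unwrap_or_else` fallbacks disappear. A call-site sketch with variable names assumed:

```rust
// All parameters are now borrowed strings; no Option wrapping at the call site.
mail::send_emergency_access_invite(
    &grantee_email,
    &grantee_user.uuid,
    &emergency_access.uuid,
    &grantor_user.name,
    &grantor_user.email,
)
.await?;
```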
@@ -260,7 +274,8 @@ pub async fn send_emergency_access_invite(
"email/send_emergency_access_invite",
json!({
"url": CONFIG.domain(),
"emer_id": emer_id.unwrap_or_else(|| "_".to_string()),
"img_src": CONFIG._smtp_img_src(),
"emer_id": emer_id,
"email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
"grantor_name": grantor_name,
"token": invite_token,
@@ -275,6 +290,7 @@ pub async fn send_emergency_access_invite_accepted(address: &str, grantee_email:
"email/emergency_access_invite_accepted",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"grantee_email": grantee_email,
}),
)?;
@@ -287,6 +303,7 @@ pub async fn send_emergency_access_invite_confirmed(address: &str, grantor_name:
"email/emergency_access_invite_confirmed",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"grantor_name": grantor_name,
}),
)?;
@@ -299,6 +316,7 @@ pub async fn send_emergency_access_recovery_approved(address: &str, grantor_name
"email/emergency_access_recovery_approved",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"grantor_name": grantor_name,
}),
)?;
@@ -310,12 +328,13 @@ pub async fn send_emergency_access_recovery_initiated(
address: &str,
grantee_name: &str,
atype: &str,
wait_time_days: &str,
wait_time_days: &i32,
) -> EmptyResult {
let (subject, body_html, body_text) = get_text(
"email/emergency_access_recovery_initiated",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"grantee_name": grantee_name,
"atype": atype,
"wait_time_days": wait_time_days,
@@ -335,6 +354,7 @@ pub async fn send_emergency_access_recovery_reminder(
"email/emergency_access_recovery_reminder",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"grantee_name": grantee_name,
"atype": atype,
"days_left": days_left,
@@ -349,6 +369,7 @@ pub async fn send_emergency_access_recovery_rejected(address: &str, grantor_name
"email/emergency_access_recovery_rejected",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"grantor_name": grantor_name,
}),
)?;
@@ -361,6 +382,7 @@ pub async fn send_emergency_access_recovery_timed_out(address: &str, grantee_nam
"email/emergency_access_recovery_timed_out",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"grantee_name": grantee_name,
"atype": atype,
}),
@@ -374,6 +396,7 @@ pub async fn send_invite_accepted(new_user_email: &str, address: &str, org_name:
"email/invite_accepted",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"email": new_user_email,
"org_name": org_name,
}),
@@ -387,6 +410,7 @@ pub async fn send_invite_confirmed(address: &str, org_name: &str) -> EmptyResult
"email/invite_confirmed",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"org_name": org_name,
}),
)?;
@@ -403,6 +427,7 @@ pub async fn send_new_device_logged_in(address: &str, ip: &str, dt: &NaiveDateTi
"email/new_device_logged_in",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"ip": ip,
"device": device,
"datetime": crate::util::format_naive_datetime_local(dt, fmt),
@@ -421,6 +446,7 @@ pub async fn send_incomplete_2fa_login(address: &str, ip: &str, dt: &NaiveDateTi
"email/incomplete_2fa_login",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"ip": ip,
"device": device,
"datetime": crate::util::format_naive_datetime_local(dt, fmt),
@@ -436,6 +462,7 @@ pub async fn send_token(address: &str, token: &str) -> EmptyResult {
"email/twofactor_email",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"token": token,
}),
)?;
@@ -448,6 +475,7 @@ pub async fn send_change_email(address: &str, token: &str) -> EmptyResult {
"email/change_email",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"token": token,
}),
)?;
@@ -460,6 +488,7 @@ pub async fn send_test(address: &str) -> EmptyResult {
"email/smtp_test",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
}),
)?;
@@ -468,12 +497,32 @@ pub async fn send_test(address: &str) -> EmptyResult {
async fn send_email(address: &str, subject: &str, body_html: String, body_text: String) -> EmptyResult {
let smtp_from = &CONFIG.smtp_from();
let body = if CONFIG.smtp_embed_images() {
let logo_gray_body = Body::new(crate::api::static_files("logo-gray.png".to_string()).unwrap().1.to_vec());
let mail_github_body = Body::new(crate::api::static_files("mail-github.png".to_string()).unwrap().1.to_vec());
MultiPart::alternative().singlepart(SinglePart::plain(body_text)).multipart(
MultiPart::related()
.singlepart(SinglePart::html(body_html))
.singlepart(
Attachment::new_inline(String::from("logo-gray.png"))
.body(logo_gray_body, "image/png".parse().unwrap()),
)
.singlepart(
Attachment::new_inline(String::from("mail-github.png"))
.body(mail_github_body, "image/png".parse().unwrap()),
),
)
} else {
MultiPart::alternative_plain_html(body_text, body_html)
};
let email = Message::builder()
.message_id(Some(format!("<{}@{}>", crate::util::get_uuid(), smtp_from.split('@').collect::<Vec<&str>>()[1])))
.to(Mailbox::new(None, Address::from_str(address)?))
.from(Mailbox::new(Some(CONFIG.smtp_from_name()), Address::from_str(smtp_from)?))
.subject(subject)
.multipart(MultiPart::alternative_plain_html(body_text, body_html))?;
.multipart(body)?;
match mailer().send(email).await {
Ok(_) => Ok(()),
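The inline parts above are addressed by Content-ID, so the HTML templates presumably reference them through the new `img_src` value (for example `src="{{img_src}}logo-gray.png"` rendering to `cid:logo-gray.png` when embedding is enabled). A sketch of what `CONFIG._smtp_img_src()` could return; the real implementation and the `/vw_static/` path are assumptions, not part of this diff:

```rust
// Assumed behaviour: point image sources at the inline attachments when embedding,
// otherwise at statically hosted copies under the configured domain.
fn smtp_img_src(embed_images: bool, domain: &str) -> String {
    if embed_images {
        "cid:".to_string()
    } else {
        format!("{domain}/vw_static/")
    }
}
```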

View File

@@ -12,9 +12,13 @@
clippy::equatable_if_let,
clippy::float_cmp_const,
clippy::inefficient_to_string,
clippy::iter_on_empty_collections,
clippy::iter_on_single_items,
clippy::linkedlist,
clippy::macro_use_imports,
clippy::manual_assert,
clippy::manual_instant_elapsed,
clippy::manual_string_new,
clippy::match_wildcard_for_single_variants,
clippy::mem_forget,
clippy::string_add_assign,
@@ -30,7 +34,7 @@
// The more key/value pairs there are the more recursion occurs.
// We want to keep this as low as possible, but not higher than 128.
// If you go above 128 it will cause rust-analyzer to fail,
#![recursion_limit = "87"]
#![recursion_limit = "94"]
// When enabled use MiMalloc as malloc instead of the default malloc
#[cfg(feature = "enable_mimalloc")]
@@ -61,6 +65,11 @@ use std::{
thread,
};
use tokio::{
fs::File,
io::{AsyncBufReadExt, BufReader},
};
#[macro_use]
mod error;
mod api;
@@ -89,7 +98,7 @@ async fn main() -> Result<(), Error> {
let extra_debug = matches!(level, LF::Trace | LF::Debug);
check_data_folder();
check_data_folder().await;
check_rsa_keys().unwrap_or_else(|_| {
error!("Error creating keys, exiting...");
exit(1);
@@ -103,7 +112,7 @@ async fn main() -> Result<(), Error> {
let pool = create_db_pool().await;
schedule_jobs(pool.clone()).await;
crate::db::models::TwoFactor::migrate_u2f_to_webauthn(&pool.get().await.unwrap()).await.unwrap();
crate::db::models::TwoFactor::migrate_u2f_to_webauthn(&mut pool.get().await.unwrap()).await.unwrap();
launch_rocket(pool, extra_debug).await // Blocks until program termination.
}
@@ -162,6 +171,13 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
log::LevelFilter::Off
};
let diesel_logger_level: log::LevelFilter =
if cfg!(feature = "query_logger") && std::env::var("QUERY_LOGGER").is_ok() {
log::LevelFilter::Debug
} else {
log::LevelFilter::Off
};
let mut logger = fern::Dispatch::new()
.level(level)
// Hide unknown certificate errors if using self-signed
@@ -182,6 +198,7 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
.level_for("cookie_store", log::LevelFilter::Off)
// Variable level for trust-dns used by reqwest
.level_for("trust_dns_proto", trust_dns_level)
.level_for("diesel_logger", diesel_logger_level)
.chain(std::io::stdout());
// Enable SMTP debug logging only for the SMTP module, and only when needed.
@@ -286,7 +303,7 @@ fn create_dir(path: &str, description: &str) {
create_dir_all(path).expect(&err_msg);
}
fn check_data_folder() {
async fn check_data_folder() {
let data_folder = &CONFIG.data_folder();
let path = Path::new(data_folder);
if !path.exists() {
@@ -298,10 +315,15 @@ fn check_data_folder() {
}
exit(1);
}
if !path.is_dir() {
error!("Data folder '{}' is not a directory.", data_folder);
exit(1);
}
let persistent_volume_check_file = format!("{data_folder}/vaultwarden_docker_persistent_volume_check");
let check_file = Path::new(&persistent_volume_check_file);
if check_file.exists() && std::env::var("I_REALLY_WANT_VOLATILE_STORAGE").is_err() {
if is_running_in_docker()
&& std::env::var("I_REALLY_WANT_VOLATILE_STORAGE").is_err()
&& !docker_data_folder_is_persistent(data_folder).await
{
error!(
"No persistent volume!\n\
########################################################################################\n\
@@ -314,6 +336,38 @@ fn check_data_folder() {
}
}
/// Detect, when running under Docker or Podman, whether the DATA_FOLDER is either a bind-mount or a manually created volume.
/// If it was not created manually, the data will not be persistent.
/// A non-persistent volume in either Docker or Podman is represented by a 64-character alphanumeric string.
/// If we detect this string, we alert that there is no persistent, self-defined volume.
/// This probably means that someone forgot to add `-v /path/to/vaultwarden_data/:/data`
async fn docker_data_folder_is_persistent(data_folder: &str) -> bool {
if let Ok(mountinfo) = File::open("/proc/self/mountinfo").await {
// Since there can only be one mountpoint for the DATA_FOLDER,
// we do a basic check for this mountpoint surrounded by spaces.
let data_folder_match = if data_folder.starts_with('/') {
format!(" {data_folder} ")
} else {
format!(" /{data_folder} ")
};
let mut lines = BufReader::new(mountinfo).lines();
while let Some(line) = lines.next_line().await.unwrap_or_default() {
// Only execute a regex check if we find the base match
if line.contains(&data_folder_match) {
let re = regex::Regex::new(r"/volumes/[a-z0-9]{64}/_data /").unwrap();
if re.is_match(&line) {
return false;
}
// If we did find a match for the mountpoint, but not for the regex, still stop searching.
break;
}
}
}
// In all other cases, just assume true.
// This is only an informative check to try to prevent data loss.
true
}
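For reference, the 64-character ID the check looks for is how anonymous Docker/Podman volumes show up in `/proc/self/mountinfo`; the exact field layout below is illustrative only. A small test sketch of the pattern used above:

```rust
#[test]
fn anonymous_volume_mountinfo_line_matches() {
    // The field layout of this line is assumed for illustration; only the
    // "/volumes/<64 chars>/_data /" fragment matters for the check.
    let volume_id = "a".repeat(64);
    let line = format!(
        "683 652 0:64 /var/lib/docker/volumes/{volume_id}/_data /data rw,relatime - ext4 /dev/sda1 rw"
    );
    let re = regex::Regex::new(r"/volumes/[a-z0-9]{64}/_data /").unwrap();
    assert!(re.is_match(&line));
}
```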
fn check_rsa_keys() -> Result<(), crate::error::Error> {
// If the RSA keys don't exist, try to create them
let priv_path = CONFIG.private_rsa_key();
@@ -384,9 +438,13 @@ async fn launch_rocket(pool: db::DbPool, extra_debug: bool) -> Result<(), Error>
.mount([basepath, "/"].concat(), api::web_routes())
.mount([basepath, "/api"].concat(), api::core_routes())
.mount([basepath, "/admin"].concat(), api::admin_routes())
.mount([basepath, "/events"].concat(), api::core_events_routes())
.mount([basepath, "/identity"].concat(), api::identity_routes())
.mount([basepath, "/icons"].concat(), api::icons_routes())
.mount([basepath, "/notifications"].concat(), api::notifications_routes())
.register([basepath, "/"].concat(), api::web_catchers())
.register([basepath, "/api"].concat(), api::core_catchers())
.register([basepath, "/admin"].concat(), api::admin_catchers())
.manage(pool)
.manage(api::start_notification_server())
.attach(util::AppHeaders())
@@ -396,11 +454,12 @@ async fn launch_rocket(pool: db::DbPool, extra_debug: bool) -> Result<(), Error>
.await?;
CONFIG.set_rocket_shutdown_handle(instance.shutdown());
ctrlc::set_handler(move || {
tokio::spawn(async move {
tokio::signal::ctrl_c().await.expect("Error setting Ctrl-C handler");
info!("Exiting vaultwarden!");
CONFIG.shutdown();
})
.expect("Error setting Ctrl-C handler");
});
let _ = instance.launch().await?;
@@ -463,6 +522,16 @@ async fn schedule_jobs(pool: db::DbPool) {
}));
}
// Clean up the event table, removing records older than the configured number of days.
if CONFIG.org_events_enabled()
&& !CONFIG.event_cleanup_schedule().is_empty()
&& CONFIG.events_days_retain().is_some()
{
sched.add(Job::new(CONFIG.event_cleanup_schedule().parse().unwrap(), || {
runtime.spawn(api::event_cleanup_job(pool.clone()));
}));
}
// Periodically check for jobs to run. We probably won't need any
// jobs that run more often than once a minute, so a default poll
// interval of 30 seconds should be sufficient. Users who want to

View File

@@ -186,6 +186,7 @@
"Type": 18,
"Domains": [
"amazon.com",
"amazon.com.be",
"amazon.ae",
"amazon.ca",
"amazon.co.uk",
@@ -939,5 +940,17 @@
"pyszne.pl"
],
"Excluded": false
},
{
"Type": 89,
"Domains": [
"atlassian.com",
"bitbucket.org",
"trello.com",
"statuspage.io",
"atlassian.net",
"jira.com"
],
"Excluded": false
}
]

BIN  src/static/images/404.png (new file, 4.9 KiB; binary content not shown)
BIN  four further binary image files changed or were added (331 B → 483 B, 2.5 KiB → 2.3 KiB, new 886 B, 945 B → 1.4 KiB); binary contents not shown

One file diff was suppressed because it is too large.
Some files were not shown because too many files have changed in this diff.