The API should only return a user's real email address if the caller is
logged in. The check responsible for this doesn't work; this PR fixes it. This is not
really a security issue, but it can lead to spam.
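A minimal sketch of the intended check (names and parameters are illustrative, not the actual Gitea code):

```go
// visibleEmail returns the address the API should expose: only signed-in
// callers get the real address, everyone else gets the no-reply placeholder.
func visibleEmail(realEmail, noReplyEmail string, callerSignedIn bool) string {
	if callerSignedIn {
		return realEmail
	}
	return noReplyEmail
}
```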
---------
Co-authored-by: silverwind <me@silverwind.io>
(cherry picked from commit ea385f5d39)
- The action table can become very large, as it is a dumping ground for every
action a user performs on a repository.
- The following query: `DELETE FROM action WHERE comment_id IN (SELECT id FROM comment WHERE
issue_id=?)` does not use the index on `comment_id`; MariaDB falls back to
a full table scan instead.
- Rewriting the query to use a JOIN allows MariaDB to use the
index (a sketch of the rewritten query follows this list).
- More information: https://codeberg.org/Codeberg-Infrastructure/techstack-support/issues/9
- Backport https://codeberg.org/forgejo/forgejo/pulls/1154
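A sketch of the rewritten deletion, assuming xorm's raw `Exec` is used (the exact code in the PR may differ); the JOIN lets MariaDB use the `comment_id` index instead of scanning the whole `action` table:

```go
// MariaDB multi-table DELETE: remove action rows joined to comments of the issue.
_, err := sess.Exec(
	"DELETE action FROM action "+
		"INNER JOIN comment ON action.comment_id = comment.id "+
		"WHERE comment.issue_id = ?",
	issueID,
)
```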
Backport #26182 by @Zettat123
Fix #25934
Add an `ignoreGlobal` parameter to `reqUnitAccess` and skip the globally
disabled unit check when `ignoreGlobal` is true, so org-level projects
and user-level projects won't be affected by the globally disabled
`repo.projects` unit.
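A rough sketch of the idea (the signature and helper names are assumptions, not the exact implementation):

```go
// The globally-disabled check only applies when ignoreGlobal is false, so
// org-level and user-level project routes can opt out of it.
func reqUnitAccess(unitType unit.Type, accessMode perm.AccessMode, ignoreGlobal bool) func(ctx *context.APIContext) {
	return func(ctx *context.APIContext) {
		if !ignoreGlobal && unitType.UnitGlobalDisabled() {
			ctx.Error(http.StatusForbidden, "reqUnitAccess", "this unit is disabled globally")
			return
		}
		// ... existing per-owner / per-repository permission checks ...
	}
}
```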
Co-authored-by: Zettat123 <zettat123@gmail.com>
(cherry picked from commit 3a29712e0a)
Backport #26192 by @KN4CK3R
Fixes #25918
The migration fails on MSSQL because xorm tries to update the primary
key column. xorm prevents this if the column is marked as auto
increment:
c622cdaf89/internal/statements/update.go (L38-L40)
I think it would be better if xorm checked for primary key columns
here, because updating such columns is bad practice. It looks like the
auto increment check is intended to cover the same case.
fyi @lunny
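For illustration, marking a primary key as auto increment in the xorm struct tag looks like this (sketch only; the concrete struct in the migration differs):

```go
// Marking the primary key as auto increment makes xorm skip it when building
// the UPDATE statement, which is what fails on MSSQL.
type Example struct {
	ID   int64 `xorm:"pk autoincr"`
	Name string
}
```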
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
(cherry picked from commit ecfbcced46)
Backport #26122 by @Zettat123
This PR
- Fix #26093. Replace `time.Time` with `timeutil.TimeStamp`
- Fix #26135. Add the missing `xorm:"extends"` to `CountLFSMetaObject` for the
LFS meta object query (see the sketch after this list)
- Add a unit test for LFS meta object garbage collection
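For reference, the `extends` tag makes xorm map the joined columns into the embedded struct; the struct looks roughly like this (the actual definition may differ slightly):

```go
// CountLFSMetaObject wraps an LFSMetaObject together with an aggregate count;
// without `xorm:"extends"` the embedded struct's columns would not be filled.
type CountLFSMetaObject struct {
	Count         int64
	LFSMetaObject `xorm:"extends"`
}
```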
Co-authored-by: Zettat123 <zettat123@gmail.com>
(cherry picked from commit a12d036a68)
- Follow up for: #540, #802
- Add API routes for user blocking from user and organization
perspective.
- The new routes have integration testing.
- The new model functions have unit tests.
- Actually quite boring to write and to read this pull request.
(cherry picked from commit f3afaf15c7)
(cherry picked from commit 6d754db3e5)
(cherry picked from commit d0fc8bc9d3)
(cherry picked from commit 9a53b0d1a0)
(cherry picked from commit 44a2a4fd48)
(cherry picked from commit 182025db9c)
(cherry picked from commit 558a35963e)
- Resolves #476
- Follow up for: #540
- Ensure that the doer and blocked person cannot follow each other.
- Ensure that the blocked person cannot watch the doer's repositories.
- Add unblock button to the blocked user list.
- Add blocked since information to the blocked user list.
- Add extra testing to moderation code.
- The blocked user will unwatch the doer's owned repositories upon blocking.
- Add flash messages to let the user know the block/unblock action was successful.
- Add "You haven't blocked any users" message.
- Add organization blocking a user.
Co-authored-by: Gusted <postmaster@gusted.xyz>
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/802
(cherry picked from commit 0505a10421)
(cherry picked from commit 37b4e6ef9b)
(cherry picked from commit 217475385a)
(cherry picked from commit f2c38ce5c2)
(cherry picked from commit 1edfb68137)
(cherry picked from commit 2cbc12dc74)
(cherry picked from commit 79ff020f18)
- Add the ability to block a user via their profile page.
- This will unstar their repositories, and vice versa.
- Blocked users cannot create issues or pull requests on the doer's repositories (mind that this is not the case for organizations).
- Blocked users cannot comment on the doer's opened issues or pull requests.
- Blocked users cannot add reactions to doer's comments.
- Blocked users cannot cause a notification through mentioning the doer.
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/540
(cherry picked from commit 687d852480)
(cherry picked from commit 0c32a4fde5)
(cherry picked from commit 1791130e3c)
(cherry picked from commit 00f411819f)
(cherry picked from commit e0c039b0e8)
(cherry picked from commit b5a058ef00)
(cherry picked from commit 5ff5460d28)
(cherry picked from commit 97bc6e619d)
- Implements https://codeberg.org/forgejo/discussions/issues/32#issuecomment-918737
- Allows adding Forgejo-specific migrations that don't interfere with Gitea's migration logic. Please do note that we cannot liberally add migrations for Gitea tables, as Gitea might run its own migrations on those tables in a future version, which could undo our migrations. Luckily, we don't have a scenario where that's needed, so it isn't taken into account.
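A rough sketch of the approach, assuming a dedicated version record for Forgejo migrations (names and layout are illustrative, not the actual Forgejo code):

```go
// Forgejo-specific migrations keep their own version counter in a dedicated
// table, so Gitea's own migration sequence is never touched.
type ForgejoVersion struct {
	ID      int64 `xorm:"pk autoincr"`
	Version int64 // highest Forgejo-specific migration applied so far
}

// Each entry runs at most once; the stored ForgejoVersion decides where to resume.
var forgejoMigrations = []func(x *xorm.Engine) error{
	// v1, v2, ... appended here over time
}

func migrateForgejo(x *xorm.Engine, current int64) error {
	for i := current; i < int64(len(forgejoMigrations)); i++ {
		if err := forgejoMigrations[i](x); err != nil {
			return err
		}
	}
	return nil
}
```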
Co-authored-by: Gusted <postmaster@gusted.xyz>
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/795
(cherry picked from commit 8ee32978c0)
(cherry picked from commit c240b34f59)
(cherry picked from commit 03936c6492)
(cherry picked from commit 8bd051e6df)
(cherry picked from commit 2c55a40c79)
(cherry picked from commit 260d938e92)
(cherry picked from commit cf7c08031f)
(cherry picked from commit 59c1547517)
Backport #25806 by @yp05327
sort type `oldest` should be `Asc`.
Added a test for this.
I see we have `SearchOrderBy` in db model, but we are using many
different ways to define the sort type.
~Maybe we can improve this later.~
↑ Improved in this PR
Co-authored-by: yp05327 <576951401@qq.com>
Backport #25707 by @KN4CK3R
Fixes (?) #25538
Fixes https://codeberg.org/forgejo/forgejo/issues/972
Regression of #23879, which introduced a change that prevents read access to packages if a
user is not a member of an organization.
That PR also contained a change which disallows package access if the
team unit is configured with "no access" for packages. I don't think
this change makes sense (at the moment). It may be relevant for private
orgs. But for public or limited orgs that's useless because an
unauthorized user would have more access rights than the team member.
This PR restores the old behaviour "If a user has read access for an
owner, they can read packages".
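In other words, package read access simply follows read access to the owner; a minimal sketch of the restored rule (types simplified for illustration):

```go
// AccessMode levels, simplified for illustration.
type AccessMode int

const (
	AccessModeNone AccessMode = iota
	AccessModeRead
	AccessModeWrite
)

// canReadPackages: anyone with at least read access to the owner can read
// the owner's packages, regardless of team unit configuration.
func canReadPackages(ownerAccess AccessMode) bool {
	return ownerAccess >= AccessModeRead
}
```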
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Backport #25734 by @KN4CK3R
The method is only used in the test. Found it because I changed the
fixtures and had a hard time fixing this test. My revenge is deleting
it.
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Backport #22759 by @KN4CK3R
related #16865
This PR adds an accessibility check before mounting container blobs.
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: silverwind <me@silverwind.io>
Backport #25560 by @wolfogre
Fix #25451.
Bugfixes:
- When stopping zombie or endless tasks, set `LogInStorage` to true
after transferring the file to storage. This was missing, so writes could go
to a nonexistent file in DBFS because `LogInStorage` was still false.
- Always update `ActionTask.Updated` when there's a new state reported
by the runner, even if there's no change. This is to avoid the task
being judged as a zombie task.
Enhancement:
- Support `Stat()` for DBFS files.
- `WriteLogs` refuses to write if the write could result in content holes (see the sketch below).
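A sketch of the content-hole guard (the real `WriteLogs` signature and the DBFS layer differ; names here are hypothetical):

```go
type logFile interface {
	Size() (int64, error)                     // hypothetical: current file length
	WriteAt(p []byte, off int64) (int, error) // hypothetical: positioned write
}

// Refuse any write whose offset lies beyond the current end of the file,
// since that would leave an unwritten hole in the log.
func writeLogs(f logFile, offset int64, data []byte) error {
	size, err := f.Size()
	if err != nil {
		return err
	}
	if offset > size {
		return fmt.Errorf("refusing write at offset %d (file size %d): it would create a content hole", offset, size)
	}
	_, err = f.WriteAt(data, offset)
	return err
}
```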
Co-authored-by: Jason Song <i@wolfogre.com>
Backport #25330
# The problem
There were many "path tricks":
* By default, Gitea uses its program directory as its work path
* Gitea tries to use the "work path" to guess its "custom path" and
"custom conf (app.ini)"
* Users might want to use other directories as work path
* The non-default work path should be passed to Gitea by GITEA_WORK_DIR
or "--work-path"
* But some Gitea processes are started without these values
* The "serv" process started by OpenSSH server
* The CLI sub-commands started by site admin
* The paths are guessed by SetCustomPathAndConf again and again
* The default values of "work path / custom path / custom conf" can be
changed when compiling
# The solution
* Use `InitWorkPathAndCommonConfig` to handle these path tricks, and use
test code to cover its behaviors.
* When Gitea's web server runs, write the WORK_PATH to "app.ini". This
value must be the most correct one, because if it is wrong, users will
find that the web UI doesn't work and will then be able to fix it.
* Then all other sub-commands can use the WORK_PATH in app.ini to
initialize their paths.
* By the way, when Gitea starts for the git protocol, it shouldn't output
any log; otherwise the git protocol gets broken and the client blocks
forever.
The "work path" priority is: WORK_PATH in app.ini > cmd arg --work-path
> env var GITEA_WORK_DIR > builtin default
The "app.ini" searching order is: cmd arg --config > cmd arg "work path
/ custom path" > env var "work path / custom path" > builtin default
## ⚠️ BREAKING
If your instance's "work path / custom path / custom conf" doesn't meet
the requirements (eg: work path must be absolute), Gitea will report a
fatal error and exit. You need to set these values according to the
error log.
Backport #23911 by @lunny
Follow up to #22405. Fix #20703
This PR rewrites the storage configuration reading sequence, with some breaking changes
and tests. It is more strict than before and also fixes some inheritance
problems.
- Move storage's MinioConfig struct into the setting package, so after
configuration loading the values are stored in the struct rather than
remaining in an ini section (see the sketch after this list).
- All storage configuration should be stored in one section;
configuration items cannot be overridden by multiple sections. The
priority of the configuration is `[attachment]` > `[storage.attachments]` |
`[storage.customized]` > `[storage]` > `default`.
- For the extra override configuration items, currently `SERVE_DIRECT`,
`MINIO_BASE_PATH`, and `MINIO_BUCKET`, which can be configured in another
section, the priority of the override configuration is `[attachment]` >
`[storage.attachments]` > `default`.
- Add more tests for storage configurations.
- Update the storage documentation.
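A rough sketch of the typed struct in `setting` (field names are an approximation, not the exact code); the point is that parsed values live on the struct instead of being re-read from ini sections later:

```go
// MinioStorageConfig holds the Minio options after configuration loading.
type MinioStorageConfig struct {
	Endpoint        string
	AccessKeyID     string
	SecretAccessKey string
	Bucket          string
	BasePath        string
	UseSSL          bool
	ServeDirect     bool
}
```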
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Backport #25214 by @KN4CK3R
The ghost user leads to inclusion of limited users/orgs in
`BuildCanSeeUserCondition`.
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
## Changes
- Adds the following high level access scopes, each with `read` and
`write` levels:
- `activitypub`
- `admin` (hidden if user is not a site admin)
- `misc`
- `notification`
- `organization`
- `package`
- `issue`
- `repository`
- `user`
- Adds new middleware function `tokenRequiresScopes()` in addition to
`reqToken()`
- `tokenRequiresScopes()` is used for each high-level api section
- _if_ a scoped token is present, checks that the required scope is
included based on the section and HTTP method
- `reqToken()` is used for individual routes
- checks that required authentication is present (but does not check
scope levels, as this will already have been handled by
`tokenRequiresScopes()`)
- Adds migration to convert old scoped access tokens to the new set of
scopes
- Updates the user interface for scope selection
### User interface example
<img width="903" alt="Screen Shot 2023-05-31 at 1 56 55 PM"
src="https://github.com/go-gitea/gitea/assets/23248839/654766ec-2143-4f59-9037-3b51600e32f3">
<img width="917" alt="Screen Shot 2023-05-31 at 1 56 43 PM"
src="https://github.com/go-gitea/gitea/assets/23248839/1ad64081-012c-4a73-b393-66b30352654c">
## tokenRequiresScopes Design Decision
- `tokenRequiresScopes()` was added to more reliably cover api routes.
For an incoming request, this function uses the given scope category
(say `AccessTokenScopeCategoryOrganization`) and the HTTP method (say
`DELETE`) and verifies that any scoped tokens in use include
`delete:organization` (a sketch of this mapping is shown after this list).
- `reqToken()` is used to enforce auth for individual routes that
require it. If a scoped token is not present for a request,
`tokenRequiresScopes()` will not return an error.
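A sketch of the method-to-scope-level mapping (simplified; the helper name and exact level strings are assumptions, not the implementation):

```go
// Safe methods only need the category's read level; mutating methods need the
// write level, which also covers deletes.
func requiredScopeLevel(method string) string {
	switch method {
	case "GET", "HEAD", "OPTIONS":
		return "read"
	default: // POST, PUT, PATCH, DELETE
		return "write"
	}
}
```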
## TODO
- [x] Alphabetize scope categories
- [x] Change 'public repos only' to a radio button (private vs public).
Also expand this to organizations
- [X] Disable token creation if no scopes selected. Alternatively, show
warning
- [x] `reqToken()` is missing from many `POST/DELETE` routes in the api.
`tokenRequiresScopes()` only checks that a given token has the correct
scope, `reqToken()` must be used to check that a token (or some other
auth) is present.
- _This should be addressed in this PR_
- [x] The migration should be reviewed very carefully in order to
minimize access changes to existing user tokens.
- _This should be addressed in this PR_
- [x] Link to api to swagger documentation, clarify what
read/write/delete levels correspond to
- [x] Review cases where more than one scope is needed as this directly
deviates from the api definition.
- _This should be addressed in this PR_
- For example:
```go
m.Group("/users/{username}/orgs", func() {
m.Get("", reqToken(), org.ListUserOrgs)
m.Get("/{org}/permissions", reqToken(), org.GetUserOrgsPermissions)
}, tokenRequiresScopes(auth_model.AccessTokenScopeCategoryUser,
auth_model.AccessTokenScopeCategoryOrganization),
context_service.UserAssignmentAPI())
```
## Future improvements
- [ ] Add required scopes to swagger documentation
- [ ] Redesign `reqToken()` to be opt-out rather than opt-in
- [ ] Subdivide scopes like `repository`
- [ ] Once a token is created, if it has no scopes, we should display
text instead of an empty bullet point
- [ ] If the 'public repos only' option is selected, should read
categories be selected by default
Closes #24501. Closes #24799.
Co-authored-by: Jonathan Tran <jon@allspice.io>
Co-authored-by: Kyle D <kdumontnu@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Before, Gitea shows the database table stats on the `admin dashboard`
page.
It has some problems:
* `count(*)` is quite heavy. If tables have many records, it blocks
loading of the admin page for a long time
* Some users have even reported that they can't visit their admin
page because it blocks or returns a `50x error (reverse proxy
timeout)`
* The `actions` stat is not useful. The table is simply too large. Does
it really matter if it contains 1,000,000 rows or 9,999,999 rows?
* The translation `admin.dashboard.statistic_info` is difficult to
maintain.
So, this PR uses a separate page to show the stats and removes the
`actions` stat.
![image](https://github.com/go-gitea/gitea/assets/2114189/babf7c61-b93b-4a62-bfaa-22983636427e)
## ⚠️ BREAKING
The `actions` Prometheus metrics collector has been removed for the
reasons mentioned beforehand.
Please do not rely on its output anymore.
This addresses some things from #24406 that came up after the PR was
merged, mostly from @delvh.
---------
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: delvh <dev.lh@web.de>
This PR replaces all string refName values with the type `git.RefName` to make the
code more maintainable.
Fix #15367
Replaces #23070
It also fixes a bug where tags were not synced, because `git remote --prune
origin` will not remove local tags if they were removed on the remote.
We should in fact use `git fetch --prune --tags origin` rather than `git
remote update origin` to do the sync (see the sketch below).
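A rough sketch of that sync (not the actual Gitea mirror code): fetching with `--prune --tags` also removes local tags that were deleted on the remote, which `git remote update origin` does not do.

```go
cmd := exec.Command("git", "fetch", "--prune", "--tags", "origin")
cmd.Dir = repoPath // hypothetical path to the local repository
if err := cmd.Run(); err != nil {
	return fmt.Errorf("tag sync failed: %w", err)
}
```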
Some answers from ChatGPT, for reference:
> If the git fetch --prune --tags command is not working as expected,
there could be a few reasons why. Here are a few things to check:
>
>Make sure that you have the latest version of Git installed on your
system. You can check the version by running git --version in your
terminal. If you have an outdated version, try updating Git and see if
that resolves the issue.
>
>Check that your Git repository is properly configured to track the
remote repository's tags. You can check this by running git config
--get-all remote.origin.fetch and verifying that it includes
+refs/tags/*:refs/tags/*. If it does not, you can add it by running git
config --add remote.origin.fetch "+refs/tags/*:refs/tags/*".
>
>Verify that the tags you are trying to prune actually exist on the
remote repository. You can do this by running git ls-remote --tags
origin to list all the tags on the remote repository.
>
>Check if any local tags have been created that match the names of tags
on the remote repository. If so, these local tags may be preventing the
git fetch --prune --tags command from working properly. You can delete
local tags using the git tag -d command.
---------
Co-authored-by: delvh <dev.lh@web.de>