* Remove Salmon and PubSubHubbub endpoints
* Add error when trying to follow OStatus accounts
* Fix new accounts not being created in ResolveAccountService
* Add request pool to improve delivery performance
Fix #7909
* Ensure connection is closed when exception interrupts execution
* Remove Timeout#timeout from socket connection
* Fix infinite retrial loop on HTTP::ConnectionError
* Close sockets on failure, reduce idle time to 90 seconds
* Add MAX_REQUEST_POOL_SIZE option to limit concurrent connections to the same server
* Use a shared pool size, 512 by default, to stay below open file limit
* Add some tests
* Add more tests
* Reduce MAX_IDLE_TIME from 90 to 30 seconds, reap every 30 seconds
* Use a shared pool that returns preferred connection but re-purposes other ones when needed
* Fix wrong connection being returned on subsequent calls within the same thread
* Reduce mutex calls on flushes from 2 to 1 and add test for reaping
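A minimal sketch of the pooling approach these commits describe, using the connection_pool and http gems. Keying one pool per destination is a simplification here; the commits above share a single budget (512 by default) across all sites:
~~~
# Sketch only: the constants and env var name come from the commits above,
# everything else is illustrative.
require 'connection_pool'
require 'http'

MAX_IDLE_TIME = 30
POOL_SIZE     = ENV.fetch('MAX_REQUEST_POOL_SIZE', 512).to_i

POOLS = Hash.new do |hash, site|
  hash[site] = ConnectionPool.new(size: POOL_SIZE, timeout: 5) do
    HTTP.persistent(site) # keep-alive connection, reaped after MAX_IDLE_TIME
  end
end

POOLS['https://example.com'].with { |http| http.get('/inbox').flush }
~~~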
* Record account suspend/silence time and keep track of domain blocks
* Also unblock users who were suspended/silenced before dates were recorded
* Add tests
* Keep track of suspending date for users suspended through the CLI
* Show accurate number of accounts that would be affected by unsuspending an instance
* Change migration to set silenced_at and suspended_at
* Revert "Also unblock users who were suspended/silenced before dates were recorded"
This reverts commit a015c65d2d1e28c7b7cfab8b3f8cd5fb48b8b71c.
* Switch from using suspended and silenced to suspended_at and silenced_at
* Add post-deployment migration script to remove `suspended` and `silenced` columns
* Use Account#silence! and Account#suspend! instead of updating the underlying property
* Add silenced_at and suspended_at migration to post-migration
* Change account fabricator to translate suspended and silenced attributes
* Minor fixes
* Make unblocking domains always retroactive
* Prevent silenced local users from notifying remote users not following them
This is an attempt to extend the local restrictions of silenced users to the
federation.
* Add tests
* Add tests for making sure private status don't get sent over OStatus
* create account_identity_proofs table
* add endpoint for keybase to check local proofs
* add async task to update validity and liveness of proofs from keybase
* first pass keybase proof CRUD
* second pass keybase proof creation
* clean up proof list and add badges
* add avatar url to keybase api
* Always highlight the “Identity Proofs” navigation item when interacting with proofs.
* Update translations.
* Add profile URL.
* Reorder proofs.
* Add proofs to bio.
* Update settings/identity_proofs front-end.
* Use `link_to`.
* Only encode query params if they exist.
URLs without params had a trailing `?`.
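A minimal sketch of the fix, assuming a `params` hash and `base_url` string (both illustrative):
~~~
# Skip the query string entirely when there are no params, so bare URLs
# no longer end in a trailing "?".
require 'uri'

query = params.empty? ? '' : "?#{URI.encode_www_form(params)}"
url   = "#{base_url}#{query}"
~~~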
* Only show live proofs.
* change valid to active in proof list and update liveness before displaying
* minor fixes
* add keybase config at well-known path
* extremely naive feature flagging off the identity proof UI
* fixes for rubocop
* make identity proofs page resilient to potential keybase issues
* normalize i18n
* tweaks for brakeman
* remove two unused translations
* cleanup and add more localizations
* make keybase_contacts an admin setting
* fix ExternalProofService my_domain
* use Addressable::URI in identity proofs
* use active model serializer for keybase proof config
* more cleanup of keybase proof config
* rename proof is_valid and is_live to proof_valid and proof_live
* cleanup
* assorted tweaks for more robust communication with keybase
* Clean up
* Small fixes
* Display verified identity identically to verified links
* Clean up unused CSS
* Add caching for Keybase avatar URLs
* Remove keybase_contacts setting
* Filter incoming Announce activities by relation to local activity
Reject if announcer is not followed by local accounts, and is not
from an enabled relay, and the object is not a local status
Follow-up to #10005
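A hedged sketch of the acceptance rule described above; the predicate names are illustrative:
~~~
# Accept the Announce only if at least one relation to local activity holds.
def related_to_local_activity?
  followed_by_local_accounts? ||  # announcer has local followers
    requested_through_relay?  ||  # activity arrived via an enabled relay
    announces_local_status?       # the announced object is a local status
end
~~~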
* Fix tests
* When self-boosting, embed original toot into Announce serialization
* Process unknown self-boosts from Announce object if it is more than a URI
* Add some self-boost specs
* Only serialize private toots in self-Announces
* Ensure blocked user unfollows blocker if Block/Undo Block are processed out of order
* Add specs for Block causing unfollow and for out-of-order Block + Undo
* Fix connect timeout not being enforced
The loop was catching the timeout exception that should stop execution, so the next IP would no longer be within a timed block, which led to requests taking much longer than 10 seconds.
* Use timeout on each IP attempt, but limit to 2 attempts
* Fix code style issue
* Do not break Request#perform if no block given
* Update method stub in spec for Request
* Move timeout inside the begin/rescue block
* Use Resolv::DNS with timeout of 1 to get IP addresses
* Update Request spec to stub Resolv::DNS instead of Addrinfo
* Fix Resolv::DNS stubs in Request spec
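A hedged sketch combining these fixes: resolve addresses with Resolv::DNS under a 1-second timeout, then try at most two of them, each inside its own timed block (`host` and `port` are assumed inputs):
~~~
require 'resolv'
require 'socket'
require 'timeout'

def open_socket(host, port)
  addresses = Resolv::DNS.open do |dns|
    dns.timeouts = 1                  # 1-second DNS timeout, per the commit
    dns.getaddresses(host)
  end

  addresses.take(2).each do |addr|    # limit to 2 attempts, per the commit
    begin
      return Timeout.timeout(10) { TCPSocket.open(addr.to_s, port) }
    rescue Timeout::Error, Errno::ECONNREFUSED, Errno::EHOSTUNREACH
      next                            # fail over to the next A/AAAA record
    end
  end

  raise "could not connect to #{host}"
end
~~~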
* Add silent column to mentions
* Save silent mentions in ActivityPub Create handler and optimize it
Move networking calls out of the database transaction
* Add "limited" visibility level masked as "private" in the API
Unlike DMs, limited statuses are pushed into home feeds. The access
control rules for direct and limited statuses are almost the same,
except for counter and conversation logic
* Ensure silent column is non-null, add spec
* Ensure filters don't check silent mentions for blocks/mutes
Silent mentions mean "this person is also allowed to see" rather than "this
person is involved", and therefore do not warrant filtering
* Clean up code
* Use Status#active_mentions to limit returned mentions
* Fix code style issues
* Use Status#active_mentions in Notification
And remove stream_entry eager-loading from Notification
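A minimal sketch of the association these commits introduce; the scope name comes from the commits, the implementation is assumed:
~~~
class Status < ApplicationRecord
  has_many :mentions, dependent: :destroy

  # Mentions that mean "this person is involved", excluding silent ones.
  has_many :active_mentions, -> { where(silent: false) },
           class_name: 'Mention', inverse_of: :status
end
~~~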
Update some "context" and "it" lines in specs for clarity: "context" lines now properly describe the function input, and "it" lines describe the expected results
If the input text is blank after preparation (only a mention, only a URL,
or empty as in a media post), then use nil as the language,
since it's OK to show to everyone.
Otherwise, always fall back to the server's default locale
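A hedged sketch of that fallback; `prepare_text` and `detect_language` are illustrative names:
~~~
text = prepare_text(status_text)        # strip mentions, URLs, media-only noise
language =
  if text.blank?
    nil                                 # safe to show to everyone
  else
    detect_language(text) || I18n.default_locale.to_s
  end
~~~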
* Add keyword filtering
GET|POST /api/v1/filters
GET|PUT|DELETE /api/v1/filters/:id
- Irreversible filters can drop toots from home or notifications
- Other filters can hide toots through the client app
- Filters consist of a phrase, the contexts it applies to, and an optional expiration (see the sketch below)
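A hedged sketch of applying an irreversible home-context filter server-side; the model and column names follow the commits, the expiry handling is assumed:
~~~
filters = CustomFilter.where(account_id: receiver_id, irreversible: true)
                      .reject { |filter| filter.expires_at&.past? }
                      .select { |filter| filter.context.include?('home') }

regex = Regexp.new(filters.map { |f| Regexp.escape(f.phrase) }.join('|'),
                   Regexp::IGNORECASE)
drop_from_home = filters.any? && status.text =~ regex
~~~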
* Make sure expired filters don't get applied client-side
* Add missing API methods
* Remove "regex filter" from column settings
* Add tests
* Add test for FeedManager
* Add CustomFilter test
* Add UI for managing filters
* Add streaming API event to allow syncing filters
* Fix tests
* No need to re-require sidekiq plugins, they are required via Gemfile
* Add derailed_benchmarks tool, no need to require TTY gems in Gemfile
* Replace ruby-oembed with FetchOEmbedService
Reduce startup by 45382 allocated objects
* Remove preloaded JSON-LD in favour of caching HTTP responses
Reduce boot RAM by about 6 MiB
* Fix tests
* Fix test suite by stubbing out JSON-LD contexts
* Enable updating additional account information from user preferences via rest api
Resolves #6553
* Pacify rubocop
* Decoerce incoming settings in UserSettingsDecorator
* Create user preferences hash directly from incoming credentials instead of going through ActionController::Parameters
* Clean up user preferences update
* Use ActiveModel::Type::Boolean instead of manually checking stringified number equivalence
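For reference, the standard Rails coercion this commit switches to:
~~~
ActiveModel::Type::Boolean.new.cast('0')     # => false
ActiveModel::Type::Boolean.new.cast('true')  # => true
ActiveModel::Type::Boolean.new.cast(nil)     # => nil
~~~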
The to_s method of HTTP::Response keeps blocking until it has received the
whole content, no matter how big it is. This means it may waste time receiving
unacceptably large files, and may also consume memory and disk in the
process. This solves the inefficiency by checking the response length while
receiving.
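A minimal sketch of the size-checked read, consuming chunks instead of calling HTTP::Response#to_s; the limit and error are illustrative:
~~~
def body_with_limit(response, limit = 1_048_576)
  contents = +''
  response.body.each do |chunk|       # the http gem yields the body in chunks
    contents << chunk
    raise 'body size exceeds limit' if contents.bytesize > limit
  end
  contents
end
~~~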
HTTP connections must be explicitly closed in many cases, and letting the
perform method close connections makes its callers less redundant and
prevents them from forgetting to close connections.
* request: in the event of failure, try other IPs (#6761)
In the case where a name has multiple A/AAAA records, we should
try subsequent records instead of immediately failing when we have a
failure on the first IP address.
This significantly improves delivery success when there are network
connectivity problems affecting only IPv4 or IPv6.
* fix method call style
* request_spec: adjust test case to use Addrinfo
* request: Request/open: move private addr check to within begin/rescue
* request_spec: add case to test failover, fix exception check
* Double Addrinfo.foreach so that it correctly yields instances
A complementary change to precompute_feed_service_spec.rb also fixes its
random failure, which was caused by the Snowflake randomization of the order
of an original status and its reblog.
* Fix actors accepting invalid URI schemes or different host between URI and URL
* Fix statuses accepting invalid URI scheme or different host to actor
* Adjust tests to new requirements
* Improve readability of mismatching_origin?/invalid_origin? methods
* Don't normalize URLs in toots
URL normalization is ill-defined and may cause certain links to break.
* Change specs since we are not normalizing user-provided URLs
* Sanitize classlist properly
* Actually properly sanitize every class after the first
* Improve Formatter spec to check for multiple classes and non-space whitespace
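A hedged sketch of the sanitizer transformer these commits fix: split on any HTML whitespace (not just spaces) and check every class, not only the first; the whitelist here is illustrative:
~~~
class_list = node['class']&.split(/[\t\n\f\r ]/)
if class_list
  class_list.keep_if { |klass| klass =~ /\A(mention|hashtag|ellipsis|invisible)\z/ }
  node['class'] = class_list.join(' ')
end
~~~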
* Avoid sending explicit Undo->Announce when original deleted
* Do not forward a reply back to the server that sent it
* Deduplicate inboxes of rebloggers' followers for delete forwarding
* Adjust test
* Fix wrong class, bad SQL, wrong variable, outdated comment
* Allow hiding of reblogs from followed users
This adds a new entry to the account menu to allow users to hide
future reblogs from a user (and then if they've done that, to show
future reblogs instead).
This does not remove or add historical reblogs from/to the user's
timeline; it only affects new statuses.
The API for this operates by sending a "reblogs" key to the follow
endpoint. If this is sent when starting a new follow, it will be
respected from the beginning of the follow relationship (even if
the follow request must be approved by the followee). If this is
sent when a follow relationship already exists, it will simply
update the existing follow relationship. As with the notification
muting, this will now return an object ({reblogs: [true|false]}) or
false for each follow relationship when requesting relationship
information for an account. This should cause few issues due to an
object being truthy in many languages, but some modifications may
need to be made in pickier languages.
Database changes: adds a show_reblogs column (default true,
non-nullable) to the follows and follow_requests tables. Because
these are non-nullable, we use the existing MigrationHelpers to
perform this change without locking those tables, although the
tables are likely to be small anyway.
Tests included.
See also <https://github.com/glitch-soc/mastodon/pull/212>.
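A hedged sketch of the API interaction described above; the `post` helper is illustrative, the endpoint and parameter come from the text:
~~~
post '/api/v1/accounts/:id/follow', reblogs: false
# Relationship responses then report either false or an options object:
#   "following" => { "reblogs" => false }
~~~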
* Rubocop fixes
* Code review changes
* Test fixes
This patchset closes #648 and resolves #3271.
* Rubocop fix
* Revert reblogs defaulting in argument, fix tests
It turns out we needed this for the same reason we needed it in muting:
if nil gets passed in somehow (most usually by an API client not passing
any value), we need to detect and handle it.
We could specify a default in the parameter and then also catch nil, but
there's no great reason to duplicate the default value.
* Add structure for lists
* Add list timeline streaming API
* Add list APIs, bind list-account relation to follow relation
* Add API for adding/removing accounts from lists
* Add pagination to lists API
* Add pagination to list accounts API
* Adjust scopes for new APIs
- Creating and modifying lists merely requires "write" scope
- Fetching information about lists merely requires "read" scope
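For reference, the endpoints these commits add, carrying the scopes noted above (paths as in the released Mastodon API, not quoted from the commits):
~~~
GET|POST        /api/v1/lists
GET|PUT|DELETE  /api/v1/lists/:id
GET|POST|DELETE /api/v1/lists/:id/accounts
GET             /api/v1/timelines/list/:id
~~~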
* Add test for wrong user context on list timeline
* Clean up tests
* Clean up reblog-tracking sets from FeedManager
Builds on #5419, with a few minor optimizations and cleanup of sets
after they are no longer needed.
* Update tests, fix multiply-reblogged case
Previously, we would have lost the fact that a given status was
reblogged if the displayed reblog of it was removed, now we don't.
Also added tests to make sure FeedManager#trim cleans up our reblog
tracking keys, fixed up FeedCleanupScheduler to use the right loop,
and fixed the test for it.
* Keep references to all reblogs of a status on home feed
When inserting reblog: Add to set of reblogs of this status on
the feed, if original status was present in the feed, add it to
that set as well.
When removing a reblog: Remove it from that set. Take random
remaining item from the set. If one exists, re-insert it into feed,
otherwise do not re-insert anything.
Fix #4210
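A hedged sketch of that bookkeeping; key names are illustrative, `redis` is a redis-rb client:
~~~
reblog_key = "feed:home:#{account_id}:reblogs:#{original_id}"

# Inserting a reblog: track it as one of the statuses standing in
# for the original on this feed.
redis.sadd(reblog_key, reblog_status_id)

# Removing a reblog: drop it, then promote a random surviving reblog, if any.
redis.srem(reblog_key, reblog_status_id)
substitute = redis.srandmember(reblog_key)
redis.zadd("feed:home:#{account_id}", substitute, substitute) if substitute
~~~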
* When original is removed, toss out reblog references
We changed un-reblogging behavior when we implemented Snowflake IDs, inserting the un-reblogged status at the position where the reblogging status existed.
However, our API expects the home timeline to be ordered by status ids, and max_id/since_id filter by zset score. Due to this, the un-reblogged status appeared as the last item of the result set, and timeline expansion could skip many statuses.
So this reverts that change: the un-reblogged status is inserted at the position corresponding to its id.
* Use non-serial IDs
This change makes a number of nontrivial tweaks to the data model in
Mastodon:
* All IDs are now 8 byte integers (rather than mixed 4- and 8-byte)
* IDs are now assigned as:
* Top 6 bytes: millisecond-resolution time from epoch
* Bottom 2 bytes: serial (within the millisecond) sequence number (a sketch of this packing follows this list)
* See /lib/tasks/db.rake's `define_timestamp_id` for details, but
note that the purpose of these changes is to make it difficult to
determine the number of objects in a table from the ID of any
object.
* The Redis sorted set used for the feed will have values used to look
up toots, rather than scores. This is almost always the same as the
existing behavior, except in the case of boosted toots. This change
was made because Redis stores scores as double-precision floats,
which cannot store the new ID format exactly. Note that this doesn't
cause problems with sorting/pagination, because ZREVRANGEBYSCORE
sorts lexicographically when scores are tied. (This will still cause
sorting issues when the ID gains a new significant digit, but that's
extraordinarily uncommon.)
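A hedged sketch of the bit packing described in the list above; the real IDs are minted by a Postgres function (see lib/tasks/db.rake), and `sequence_value` is an assumed helper:
~~~
ms  = (Time.now.to_f * 1000).to_i   # top 6 bytes: milliseconds since epoch
seq = sequence_value & 0xFFFF       # bottom 2 bytes: per-millisecond serial
id  = (ms << 16) | seq
~~~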
Note a couple of tradeoffs have been made in this commit:
* lib/tasks/db.rake is used to enforce many/most column constraints,
because this commit seems likely to take a while to bring upstream.
Enforcing a post-migrate hook is an easier way to maintain the code
in the interim.
* Boosted toots will appear in the timeline as many times as they have
been boosted. This is a tradeoff due to the way the feed is saved in
Redis at the moment, but will be handled by a future commit.
This would effectively close Mastodon's #1059, as it is a
snowflake-like system of generating IDs. However, given how involved
the changes were simply within Mastodon, it may have unexpected
interactions with some clients, if they store IDs as doubles
(or as 4-byte integers). This was a problem that Twitter ran into with
their "snowflake" transition, particularly in JavaScript clients that
treated IDs as JS integers, rather than strings. It therefore would be
useful to test these changes at least in the web interface and popular
clients before pushing them to all users.
* Fix JavaScript interface with long IDs
Somewhat predictably, the JS interface handled IDs as numbers, which in
JS are IEEE double-precision floats. This loses some precision when
working with numbers as large as those generated by the new ID scheme,
so we instead handle them here as strings. This is relatively simple,
and doesn't appear to have caused any problems, but should definitely
be tested more thoroughly than the built-in tests. Several days of use
appear to support this working properly.
BREAKING CHANGE:
The major(!) change here is that IDs are now returned as strings by the
REST endpoints, rather than as integers. In practice, relatively few
changes were required to make the existing JS UI work with this change,
but it will likely hit API clients pretty hard: it's an entirely
different type to consume. (The one API client I tested, Tusky, handles
this with no problems, however.)
Twitter ran into this issue when introducing Snowflake IDs, and decided
to instead introduce an `id_str` field in JSON responses. I have opted
to *not* do that, and instead force all IDs to 64-bit integers
represented by strings in one go. (I believe Twitter exacerbated their
problem by rolling out the changes three times: once for statuses, once
for DMs, and once for user IDs, as well as by leaving an integer ID
value in JSON. As they said, "If you’re using the `id` field with JSON
in a Javascript-related language, there is a very high likelihood that
the integers will be silently munged by Javascript interpreters. In most
cases, this will result in behavior such as being unable to load or
delete a specific direct message, because the ID you're sending to the
API is different than the actual identifier associated with the
message." [1]) However, given that this is a significant change for API
users, alternatives or a transition time may be appropriate.
1: https://blog.twitter.com/developer/en_us/a/2011/direct-messages-going-snowflake-on-sep-30-2011.html
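A hedged sketch of forcing string IDs at the serialization boundary; the serializer class name is illustrative:
~~~
class REST::StatusSerializer < ActiveModel::Serializer
  attribute :id

  def id
    object.id.to_s   # 64-bit integer rendered as a JSON string
  end
end
~~~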
* Restructure feed pushes/unpushes
This was necessary because the previous behavior used Redis zset scores
to identify statuses, but those are IEEE double-precision floats, so we
can't actually use them to identify all 64-bit IDs. However, it leaves
the code in a much better state for refactoring reblog handling /
coalescing.
Feed-management code has been consolidated in FeedManager, including:
* BatchedRemoveStatusService no longer directly manipulates feed zsets
* RemoveStatusService no longer directly manipulates feed zsets
* PrecomputeFeedService has moved its logic to FeedManager#populate_feed
(PrecomputeFeedService largely made lots of calls to FeedManager, but
didn't follow the normal adding-to-feed process.)
This has the effect of unifying all of the feed push/unpush logic in
FeedManager, making it much more tractable to update it in the future.
Due to some additional checks that must be made during, for example,
batch status removals, some Redis pipelining has been removed. It does
not appear that this should cause significantly increased load, but if
necessary, some optimizations are possible in batch cases. These were
omitted in the pursuit of simplicity, but a batch_push and batch_unpush
would be possible in the future.
Tests were added to verify that pushes happen under expected conditions,
and to verify reblog behavior (both on pushing and unpushing). In the
case of unpushing, this includes testing behavior that currently leads
to confusion such as Mastodon's #2817, but this codifies that the
behavior is currently expected.
* Rubocop fixes
I could swear I made these changes already, but I must have lost them
somewhere along the line.
* Address review comments
This addresses the first two comments from review of this feature:
https://github.com/tootsuite/mastodon/pull/4801#discussion_r139336735
https://github.com/tootsuite/mastodon/pull/4801#discussion_r139336931
This adds an optional argument to FeedManager#key, the subtype of feed
key to generate. It also tests to ensure that FeedManager's settings are
such that reblogs won't be tracked forever.
* Hardcode IdToBigints migration columns
This addresses a comment during review:
https://github.com/tootsuite/mastodon/pull/4801#discussion_r139337452
This means we'll need to make sure that all _id columns going forward
are bigints, but that should happen automatically in most cases.
* Additional fixes for stringified IDs in JSON
These should be the last two. These were identified using eslint to try
to identify any plain casts to JavaScript numbers. (Some such casts are
legitimate, but these were not.)
Adding the following to .eslintrc.yml will identify casts to numbers:
~~~
no-restricted-syntax:
- warn
- selector: UnaryExpression[operator='+'] > :not(Literal)
message: Avoid the use of unary +
- selector: CallExpression[callee.name='Number']
message: Casting with Number() may coerce string IDs to numbers
~~~
The remaining three casts appear legitimate: two casts to array indices,
one in a server to turn an environment variable into a number.
* Only implement timestamp IDs for Status IDs
Per discussion in #4801, this is only being merged in for Status IDs at
this point. We do this in a migration, as there is no longer use for
a post-migration hook. We keep the initialization of the timestamp_id
function as a Rake task, as it is also needed after db:schema:load (as
db/schema.rb doesn't store Postgres functions).
* Change internal streaming payloads to stringified IDs as well
This is equivalent to 591a9af356faf2d5c7e66e3ec715502796c875cd from
#5019, with an extra change for the addition to FeedManager#unpush.
* Ensure we have a status_id_seq sequence
Apparently this is not a given when specifying a custom ID function,
so now we ensure it gets created. This uses the generic version of this
function to more easily support adding additional tables with timestamp
IDs in the future, although it would be possible to cut this down to a
less generic version if necessary. It is only run during db:schema:load
or the relevant migration, so the overhead is extraordinarily minimal.
* Transition reblogs to new Redis format
This provides a one-way migration to transition old Redis reblog entries
into the new format, with a separate tracking entry for reblogs.
It is not invertible because doing so could (if timestamp IDs are used)
require a database query for each status in each users' feed, which is
likely to be a significant toll on major instances.
* Address review comments from @akihikodaki
No functional changes.
* Additional review changes
* Heredoc cleanup
* Run db:schema:load hooks for test in development
This matches the behavior in Rails'
ActiveRecord::Tasks::DatabaseTasks.each_current_configuration, which
would otherwise break `rake db:setup` in development.
It also moves some functionality out to a library, which will be a good
place to put additional related functionality in the near future.
* Add emoji autosuggest
Some credit goes to glitch-soc/mastodon#149
* Remove server-side shortcode->unicode conversion
* Insert shortcode when suggestion is custom emoji
* Remove remnant of server-side emojis
* Update style of autosuggestions
* Fix wrong emoji filenames generated in autosuggest item
* Do not lazy load emoji picker, as that no longer works
* Fix custom emoji autosuggest
* Fix multiple "Custom" categories getting added to emoji index, only add once
* Custom emoji
- In OStatus: `<link rel="emoji" name="coolcat" href="http://..." />`
- In ActivityPub: `{ type: "Emoji", name: ":coolcat:", href: "http://..." }`
- In REST API: Status object includes `emojis` array (`shortcode`, `url`)
- Domain blocks with reject media stop emojis
- Emoji file up to 50KB
- Web UI handles custom emojis
- Static pages render custom emojis as `<img />` tags
Side effects:
- Undo #4500 optimization, as I needed to modify it to restore
shortcode handling in emojify()
- Formatter#plaintext now makes sure stripped-out line-breaks
and paragraphs are replaced with newlines
* Fix emoji at the start not being converted
* Fix ActivityPub handling of replies when LOCAL_DOMAIN ≠ WEB_DOMAIN (#4895)
For all intents and purposes, `local_url?` is used to check whether a URL refers
to the web UI or the various API endpoints of the local instance. Those things
reside on `WEB_DOMAIN`, not `LOCAL_DOMAIN`.
* Change local_url? spec, as all URLs handled by Mastodon are based on WEB_DOMAIN
Previously, the method used the stream_entry id as the status id, so the wrong status was selected as the reply target.
This PR uses StatusFinder which was introduced with `Api::Web::EmbedsController`.
* Decouple Status#local? from uri being nil
* Replace on-the-fly URI generation with stored URIs
- Generate URI in after_save hook for local statuses
- Use static value in TagManager when available, fallback to tag format
- Make TagManager use ActivityPub::TagManager to understand new format
- Adjust tests
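A hedged sketch of the after_save hook described above; the TagManager call mirrors the commits, the guard details are assumed:
~~~
class Status < ApplicationRecord
  after_save :store_uri, if: ->(status) { status.local? && status.uri.nil? }

  def store_uri
    update_column(:uri, ActivityPub::TagManager.instance.uri_for(self))
  end
end
~~~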
* Use other heuristic for locality of old statuses, do not perform long query
* Exclude tombstone stream entries from Atom feed
* Prevent nil statuses from landing in Pubsubhubbub::DistributionWorker
* Fix URI not being saved (#4818)
* Add more specs for Status
* Save generated uri immediately
and also fix method order to minimize diff.
* Fix alternate HTML URL in Atom
* Fix tests
* Remove not-null constraint from statuses migration to speed it up
* Raise an error for remote url in StatusFinder
The previous implementation allowed a remote url whose status id also exists locally.
That bug led /api/web/embed to return the wrong embed url.
* Fix oembed_controller_spec
- Use statuses controller for embeds instead of stream entries controller
- Prefer /@:username/:id/embed URL for embeds
- Use /@:username as author_url in OEmbed
- Add follow link to embeds which opens web intent in new window
- Use redis cache in development
- Cache entire embed
* Add handling of Linked Data Signatures in payloads
* Add a way to sign JSON, fix canonicalization of signature options
* Fix signatureValue encoding, send out signed JSON when distributing
* Add missing security context
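A hedged sketch of producing such a signature; `canonicalize` (RDF normalization), `actor_uri`, `document`, and `keypair` are illustrative stand-ins:
~~~
require 'base64'
require 'openssl'
require 'time'

options = {
  'type'    => 'RsaSignature2017',
  'creator' => "#{actor_uri}#main-key",
  'created' => Time.now.utc.iso8601,
}

# Options and document are canonicalized and hashed separately, then joined;
# getting this canonicalization right is what the second commit fixes.
sha256  = ->(data) { OpenSSL::Digest::SHA256.hexdigest(data) }
to_sign = sha256.call(canonicalize(options)) + sha256.call(canonicalize(document))

document['signature'] = options.merge(
  'signatureValue' => Base64.strict_encode64(
    keypair.sign(OpenSSL::Digest::SHA256.new, to_sign)
  )
)
~~~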
* Add ActivityPub inbox
* Handle ActivityPub deletes
* Handle ActivityPub creates
* Handle ActivityPub announces
* Stubs for handling all activities that need to be handled
* Add ActivityPub actor resolving
* Handle conversation URI passing in ActivityPub
* Handle content language in ActivityPub
* Send accept header when fetching actor, handle JSON parse errors
* Test for ActivityPub::FetchRemoteAccountService
* Handle public key and icon/image when embedded/as array/as resolvable URI
* Implement ActivityPub::FetchRemoteStatusService
* Add stubs for more interactions
* Undo activities implemented
* Handle out of order activities
* Hook up ActivityPub to ResolveRemoteAccountService, handle
Update Account activities
* Add fragment IDs to all transient activity serializers
* Add tests and fixes
* Add stubs for missing tests
* Add more tests
* Add more tests
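A hedged sketch of dispatching incoming inbox activities by type, as these commits describe; the class layout is illustrative:
~~~
class ActivityPub::Activity
  def self.factory(json, account)
    case json['type']
    when 'Create'   then ActivityPub::Activity::Create.new(json, account)
    when 'Announce' then ActivityPub::Activity::Announce.new(json, account)
    when 'Delete'   then ActivityPub::Activity::Delete.new(json, account)
    when 'Undo'     then ActivityPub::Activity::Undo.new(json, account)
    end
  end
end
~~~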
* Use the same emoji data on the frontend and backend
* Move emoji.json to repository, add tests
This way you don't need to install node dependencies if you only
want to run Ruby code