* start messing about with different mp4 metadata extraction
* heyyooo it works
* add test cow
* move useful multierror to gtserror package
* error out if video doesn't seem to be a real mp4
* test parsing mkv in disguise as mp4
* tidy up error handling
* remove extraneous line
* update framerate formatting
* use float32 for aspect
* fixy mctesterson
* close pipereader on failed data function
* gently slurp the bytes
* readability updates
* go fmt
* tidy up file server tests + add more cases
* start moving io wrappers to separate iotools package. Remove use of buffering while piping recache stream
Signed-off-by: kim <grufwub@gmail.com>
* add license text
Signed-off-by: kim <grufwub@gmail.com>
Co-authored-by: kim <grufwub@gmail.com>
* use readcloser for content.Content
* call media postdata function no matter what
* return a readcloser from data func
* tidy up logic of readertostore
* fix whoopsie
* select emoji using image_static_url
* use updated on AP emojis
* allow refetch of updated emojis
* cheeky workaround for test
* clean up old files for refreshed emoji
* check error for originalPostData
* shorten GetEmojiByStaticImageURL
* delete kirby (sorry nintendo)
* use 'test' value for testrig storage backend
* update test dependency
* add WaitFor func in testrig
* use WaitFor function instead of time.Sleep
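As a rough illustration of the WaitFor helper mentioned above (the actual signature and timings in testrig may differ), the idea is to poll a condition on a short ticker with an overall timeout instead of sleeping for a fixed duration:

```go
package testrig

import "time"

// WaitFor calls condition every few milliseconds until it returns true or an
// overall timeout is reached, and reports whether the condition was met.
// This is a sketch of the idea only; the real helper may differ.
func WaitFor(condition func() bool) bool {
	tick := time.NewTicker(20 * time.Millisecond)
	defer tick.Stop()

	timeout := time.NewTimer(5 * time.Second)
	defer timeout.Stop()

	for {
		select {
		case <-tick.C:
			if condition() {
				return true
			}
		case <-timeout.C:
			return false
		}
	}
}
```

A test can then wait on something observable, e.g. `WaitFor(func() bool { return messageCount() == 1 })` (illustrative condition), rather than sleeping and hoping the worker has caught up.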
* tidy up tests
* make SentMessages a sync.map
* go fmt
* feat: vendor minio client
* feat: introduce storage package with s3 support
* feat: serve s3 files directly
this saves a lot of bandwidth as the files are fetched from the object
store directly
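A minimal sketch of that idea, assuming the vendored minio-go client; the package, function, bucket, and key names here are illustrative rather than the actual GoToSocial code. Instead of proxying object bytes through the server, the handler obtains a short-lived presigned URL and sends the client to the object store itself:

```go
package s3serve

import (
	"context"
	"net/url"
	"time"

	"github.com/minio/minio-go/v7"
)

// PresignedURL returns a short-lived URL for the object, so the client can
// fetch the file from the object store directly instead of having the bytes
// proxied through (and consuming bandwidth on) the GoToSocial server.
func PresignedURL(ctx context.Context, mc *minio.Client, bucket, key string) (*url.URL, error) {
	return mc.PresignedGetObject(ctx, bucket, key, 5*time.Minute, url.Values{})
}
```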
* fix: use explicit local storage in tests
* feat: integrate s3 storage with the main server
* fix: add s3 config to cli tests
* docs: explicitly set values in example config
also adds license header to the storage package
* fix: use better http status code on s3 redirect
HTTP 302 Found is the best fit, as it signifies that the resource
requested was found but not under its presumed URL
307/TemporaryRedirect would mean that the resource is usually located at the
requested URL, which is not the case here
303/SeeOther indicates that the redirection does not link to the
requested resource but to another page
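For reference, the same choice in Go's stdlib constants; a trivial, self-contained sketch rather than the actual handler:

```go
package s3serve

import "net/http"

// redirectToObjectStore sends the client to the presigned object-store URL.
func redirectToObjectStore(w http.ResponseWriter, r *http.Request, presignedURL string) {
	// 302 Found: the resource exists, just not at the URL the client requested.
	http.Redirect(w, r, presignedURL, http.StatusFound)

	// Rejected alternatives, per the reasoning above:
	//   http.StatusTemporaryRedirect (307) - implies the resource normally lives at the requested URL
	//   http.StatusSeeOther (303)          - implies the target is a different resource altogether
}
```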
* refactor: use context in storage driver interface
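In sketch form, "use context in storage driver interface" means every driver method takes a context so S3 calls can be cancelled along with the request; the method set shown here is illustrative and may not match the real interface exactly:

```go
package storage

import (
	"context"
	"io"
)

// Driver is roughly what a context-aware storage abstraction looks like,
// with local-disk and S3 implementations behind it. Illustrative only.
type Driver interface {
	Get(ctx context.Context, key string) ([]byte, error)
	GetStream(ctx context.Context, key string) (io.ReadCloser, error)
	Put(ctx context.Context, key string, value []byte) error
	Delete(ctx context.Context, key string) error
}
```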
* update media manager to use internal/worker package, update worker with better logging
Signed-off-by: kim <grufwub@gmail.com>
* fix Queue() trace log message format operators
Signed-off-by: kim <grufwub@gmail.com>
* update media manager comment to match updated worker implementation
Signed-off-by: kim <grufwub@gmail.com>
* add png stripping code from google/wuffs
* experiment with stripping data from pngs
* add test images
* use StrippedPngDecode for pngs
* add StrippedPngDecode func
* update tests for (no)alphachannel pngs
* nolint on copied function
* Add whereNotEmptyAndNotNull
* Add GetRemoteOlderThanDays
* Add PruneRemote to Manager interface
* Start implementing PruneRemote
* add new attachment + status to tests
* fix up and test GetRemoteOlderThan
* fix bad import
* PruneRemote: return number pruned
* add Cached column to mediaattachment
* update + test pruneRemote
* update mediaTest
* use Cached column
* upstep bun to latest version
* embed structs in mediaAttachment
* migrate mediaAttachment to new format
* don't default cached to true
* select only remote media
* update db dependencies
* step bun back to last working version
* update pruneRemote to use Cached field
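Pieced together from the commits above, and only as a hedged sketch (everything beyond the names GetRemoteOlderThan, Cached, and PruneRemote is a hypothetical stand-in): the prune pass fetches remote attachments older than the configured age that are still cached, removes their stored files, flips Cached to false, and returns how many it pruned.

```go
package media

import (
	"context"
	"time"
)

// The types below are stand-ins so the sketch compiles on its own; the real
// manager, db, and storage interfaces in GoToSocial differ.
type Attachment struct {
	Cached bool
	Path   string
}

type db interface {
	GetRemoteOlderThan(ctx context.Context, olderThan time.Time) ([]*Attachment, error)
	UpdateAttachment(ctx context.Context, a *Attachment) error
}

type storage interface {
	Delete(ctx context.Context, key string) error
}

type manager struct {
	db      db
	storage storage
}

// PruneRemote removes stored files for remote attachments older than the given
// number of days, marks them as no longer cached, and returns how many were pruned.
func (m *manager) PruneRemote(ctx context.Context, olderThanDays int) (int, error) {
	olderThan := time.Now().Add(-time.Duration(olderThanDays) * 24 * time.Hour)

	attachments, err := m.db.GetRemoteOlderThan(ctx, olderThan)
	if err != nil {
		return 0, err
	}

	pruned := 0
	for _, a := range attachments {
		if !a.Cached {
			continue // already pruned or never cached locally
		}
		if err := m.storage.Delete(ctx, a.Path); err != nil {
			return pruned, err
		}
		a.Cached = false
		if err := m.db.UpdateAttachment(ctx, a); err != nil {
			return pruned, err
		}
		pruned++
	}

	return pruned, nil
}
```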
* fix storage path of test attachments
* add recache logic to manager
* fix trimmed aspect ratio
* test prune and recache
* return errwithcode
* tidy up different paths for emoji vs attachment
* fix incorrect thumbnail type being stored
* expose TransportController to media processor
* implement tee-ing recached content
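A minimal sketch of the tee-ing idea (function and type names are illustrative): while a freshly refetched remote file is streamed to the requesting client, the same bytes are written into a pipe that the media manager reads in order to recache the file in storage.

```go
package media

import "io"

// TeeToRecache returns two readers backed by the remote response body: one to
// hand to the requesting client, and one for the recache routine to persist.
// Closing the client side also closes the pipe so the recache side sees EOF.
func TeeToRecache(remote io.ReadCloser) (client io.ReadCloser, recache io.Reader) {
	pr, pw := io.Pipe()
	tee := io.TeeReader(remote, pw)

	return &teeCloser{Reader: tee, pw: pw, remote: remote}, pr
}

type teeCloser struct {
	io.Reader
	pw     *io.PipeWriter
	remote io.ReadCloser
}

func (t *teeCloser) Close() error {
	t.pw.Close() // signal end of stream to the recache reader
	return t.remote.Close()
}
```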
* add thoughts of dog to test fedi attachments
* test get remote files
* add comment on PruneRemote
* add postData cleanup to recache
* test thumbnail fetching
* add incredible diagram
* go mod tidy
* buffer pipes for recache streaming
* test for client stops reading after 1kb
* add media-remote-cache-days to config
* add cron package
* wrap logrus so it's available to cron
* start and stop cron jobs gracefully
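The cron wiring, in sketch form, assuming the robfig/cron/v3 package and a logrus adapter (the actual package, schedule, and wiring in GoToSocial may differ): jobs are registered up front, Start kicks off scheduling, and Stop waits for any in-flight job before returning so shutdown is graceful.

```go
package cleaner

import (
	"github.com/robfig/cron/v3"
	"github.com/sirupsen/logrus"
)

type Cleaner struct {
	c *cron.Cron
}

// New registers the remote-media prune job; the schedule here is illustrative.
func New(prune func()) (*Cleaner, error) {
	c := cron.New(
		// route cron's own logging through logrus
		cron.WithLogger(cron.PrintfLogger(logrus.StandardLogger())),
	)
	if _, err := c.AddFunc("@midnight", prune); err != nil {
		return nil, err
	}
	return &Cleaner{c: c}, nil
}

// Start begins running scheduled jobs in the background.
func (cl *Cleaner) Start() { cl.c.Start() }

// Stop halts scheduling and blocks until any currently running job completes.
func (cl *Cleaner) Stop() {
	<-cl.c.Stop().Done()
}
```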