pe1uca @lemmy.pe1uca.dev
Posts 46
Comments 319
How do 0% APR credits make money?
  • Oh, I was only aware of credits where the lender sets the payments to exactly the total spread over the period; those are the only ones I've seen and taken, so each month I get a charge for the amount needed to keep up with the credit.
    For the rest it then makes sense how they make money, since I've had credit cards which don't show, or at the very least hide, the amount needed to avoid interest, and only tell you the minimum payment.

  • How do 0% APR credits make money?

    I mean, the price of the product is the same, and I'm taking a loan for the duration of the credit but paying no interest? What's the catch? I can keep my money earning a bit of interest instead of handing it over right away, and without increasing the price of what I was already planning to buy. When or why wouldn't I choose 0% credit?
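To put rough numbers on that float advantage, here's a hypothetical sketch: a $1,200 purchase, 12 monthly payments, and a 4% APY savings account (none of these figures come from the post):

```python
# Hypothetical figures: a $1,200 purchase at 0% APR, paid in 12 monthly
# installments, while the unpaid balance sits in a 4% APY savings account.
price = 1200.0
months = 12
monthly_payment = price / months
monthly_rate = 0.04 / 12

balance = price          # money not yet handed to the lender
interest_earned = 0.0
for _ in range(months):
    interest_earned += balance * monthly_rate  # interest on what's still ours
    balance -= monthly_payment                 # then the monthly charge hits

print(f"${interest_earned:.2f}")  # → $26.00 kept by paying over time
```

Small, but it costs nothing extra, which is the point of the question.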

    26
    is there a way to track my location?
  • All the ones I've used require a separate service to actually do the query.
    You can use traccar, owntracks, or wanderer (this one is not realtime tho, and requires you to find an app to send the data).
    There's also gpslogger which can record everything locally (or send it to any URL you set), but you need another app or service to be able to query it properly.

  • Mozilla Acquires Anonym, Pioneering Privacy in Digital Ads
  • My unpopular opinion is that I like ads; some are well thought out, funny, and memorable.
    Ads in videogames which give you a small boost are also amazing: I don't have to spend money, just leave my phone for 30~60 seconds, and I get a bit of premium currency while supporting the devs.

    The annoying/worrisome part is all the tracking the ads do, and the very invasive ones which take up half the screen.
    If we could go back to TV-style ads, where everyone watches the same ads without individual targeting, with current technology to protect against hacking, and with ads placed sensibly so they don't hide the content, I would add an exception for them in my uBlock and Pi-hole.

  • Tools and ideas for backup of many files
  • In that case I'd recommend you use immich-go to upload them and still back up only immich instead of your original folder, since if something happens to your immich library you'd have to manually recreate it (immich doesn't update its DB from the file system).
    There was a discussion on GitHub about worries of data being compressed by immich, but it was clarified that uploaded files are saved as they are and only copies are modified, so you can safely back up its library.

    I'm not familiar with RAID, but yeah, I've also read it's mostly about uptime.

    I'd also recommend you look at restic and Duplicati.
    Both are backup tools; restic is a CLI and Duplicati is a service with a UI.
    So if you want to create the cron jobs yourself, go for restic.
    Though if you want to be able to read your backups manually, check how the data is stored: I'm using Duplicati and it saves backups in files that have to be read by Duplicati; I'm not sure I could just go and open them, unlike the data copied with rsync.

  • *Permanently Deleted*
  • Unless they've changed how it works, I can confirm.
    Some months ago I was testing Lemmy locally; I used the same URL to create a new post and it never showed up in the UI, because Lemmy treated it as a crosspost and hid it under the older one.
    At that time it was only a crosspost if the URL was the same; I'm not so sure about the title, but the body could be different.

    The thing would be to verify whether this grouping is done by the UI or by the server, which might explain why some UIs show duplicated posts.

  • Tools and ideas for backup of many files
  • For local backups I use this command:

    ```
    rsync --update -ahr --no-i-r --info=progress2 /source /dest
    ```

    You could first compress them, but since I have the space for the important stuff, this is the only command I need.

    Recently I also made a migration similar to yours.

    I've read jellyfin is hard to migrate, so I just reinstalled it and manually recreated the libraries; I didn't care about the watch history and other stuff.
    IIRC there's a post or GitHub repo with a script that tries to migrate jellyfin.

    For immich you just have to copy the database files with the same command above and that's it (of course with the stack down; you don't want to copy DB files while the database is running).
    For the library I already had it on an external drive with a symlink, so I just had to mount it on the new machine and create a similar symlink.

    I don't run any *arr so I don't know how they'd be handled.
    But I did do the migration of syncthing and duplicati.
    For syncthing I just had to find the config path and copy it with the same command above.
    (You might need to run chown on the new machine.)

    For duplicati it was easier since it provides a way to export and import the configurations.

    So depending on how the *arr programs handle their files, it can be as easy as finding their root directory and rsyncing it.
    Maybe this could also be done for jellyfin.
    Of course, be sure to look for all the config folders they need; some programs might split them between their working directory, ~/.config, ~/.local, /etc, or any other custom path.

    EDIT: for jellyfin data, evaluate how hard it would be to find again. It might be difficult, but if it's findable it doesn't require the same level of backups as your immich data, because immich normally holds data you created that can't be found anywhere else.

    Most series I have just on the main jellyfin drive.
    But immich is backed up 3-2-1: 3 copies of the data (I actually have 4), on at least 2 types of media (HDD and SSD), with 1 being offsite (rclone-encrypted into an e2 drive).

  • I updated wanderer (v0.6.1) - a self-hosted trail and GPS track database
  • Just tried it and it seems too complicated, haha. With traccar I just had to deploy a single service and use either the official app or, previously, gpslogger sending the data to an endpoint.

    With owntracks the main documentation seems to assume deployment on the base system; docker is kind of hidden.
    And with docker you need to deploy at least 3 services: the recorder, Mosquitto, and the front end.
    The app doesn't tell you what's expected in the fields for connecting to the backend. I tried with HTTPS but haven't been able to make it work.

    To be fair, this has been just today. But as long as a service has a docker compose I've always been able to deploy it in less than 10 minutes, and the rest of the day is just customizing the service.

  • I updated wanderer (v0.6.1) - a self-hosted trail and GPS track database
  • It looks amazing!

    How well fitted would this be for a Google maps timeline replacement?

    I see you mention we need to upload the files, which maybe could be obtained from an app like https://github.com/mendhak/gpslogger
    I already had a flow to get them onto my server with syncthing, so I could easily use your API to process them.

    The thing would be to have each trail marked per day and a way of showing them nicely (I haven't tested everything in the demo, hehe).

    Is there a plan to be able to process any GPS standard to automatically generate the trails?

    I'm currently using traccar, but it looks more like fleet management than something to remember where you've been.

  • NAS, Home Servers, and where do I even start?
  • I can share a bit of my journey and setup so maybe you can make a better decision.

    About point 1:

    On vultr, with the second smallest shared CPU (1 vCPU, 2GB RAM), several of my services have been running fine for years now:
    invidious, squid proxy, TODO app (vikunja), bookmarks (grimoire), key-value storage (kinto), git forge (forgejo) with CI/CD (forgejo actions), freshrss, archival (archive-box), GPS tracker (traccar), notes (trilium), authentication (authelia), monitoring (munin).
    The thing is, since I'm the only one using them, usually only one or two services receive considerable usage at a time, and I'm kind of patient, so if something takes 1 minute instead of 10 seconds I'm fine with it. That's rare anyway; maybe only forgejo actions or the archival.

    In my main pc I was hosting some stuff too: immich, jellyfin, syncthing, and duplicati.

    Just recently bought this minipc https://aoostar.com/products/aoostar-r7-2-bay-nas-amd-ryzen-7-5700u-mini-pc8c-16t-up-to-4-3ghz-with-w11-pro-ddr4-16gb-ram-512gb-nvme-ssd
    (Although I bought it from amazon so I didn't have to handle the import.)

    Haven't moved anything off the VPS yet, but I think this will be enough for a lot of the stuff I have, given the specs of the VPS.
    The ones I've moved are the ones from my main PC.
    Transcoding for jellyfin is not an issue since I already preprocessed my library into the formats my devices accept, so only immich could cause issues when uploading my photos.

    Right now the VPS is around 0.3 CPU, 1.1/1.92GB RAM, 2.26/4.8GB swap.
    The minipc is around 2.0CPU (most likely because duplicati is running right now), 3/16GB RAM, no swap.

    There are several options for minipc even with potential to upgrade ram and storage like the one I bought.
    Here's a spreadsheet I found with very good data on different options so you can easily compare them and find something that matches your needs https://docs.google.com/spreadsheets/d/1SWqLJ6tGmYHzqGaa4RZs54iw7C1uLcTU_rLTRHTOzaA/edit
    (Here's the original post where I found it https://www.reddit.com/r/MiniPCs/comments/1afzkt5/2024_general_mini_pc_guide_usa/ )

    For storage I don't have any comments since I'm still using a 512GB nvme and a 1TB external HDD; the minipc is basically my starter setup for a NAS, which I plan to fill with drives when I find any on sale (I even bought it without RAM and storage since I had spares).

    But I do have some huge files around; they are in https://www.idrive.com/s3-storage-e2/
    Using rclone I can easily have it mounted like any other drive, and there's no need to worry about being in the cloud since rclone has an encrypt option.
    Of course this is a temporary solution, since it's cheaper to buy a drive for the long term (I also use it for my backups, though).

    About point 2:

    If you go the route of using only linux, sshfs is very easy to use: I can connect from the files app or mount it via fstab. And for permissions you can easily manage everything with a new user and ACLs.

    If you need to access it from windows, I think your best bet will be samba. There are several services for this; I was using OpenMediaVault since it was the only one compatible with ARM back when I was using a raspberry pi, but when you install it, it takes over all your network interfaces and disables wifi, so you have to connect via ethernet to re-enable it.

    About point 3:

    On the VPS I also had pihole and searxng, but I had to move those to a separate instance, since if something was eating up the resources, browsing the internet was a pain, hehe.

    Probably my most critical services will remain on the VPS (like pihole, searxng, authelia, squid proxy, GPS tracker), since there I don't have to worry about my power or internet going down, or about my minipc being so overloaded with tasks that browsing the internet slows to a crawl (especially since I also run stuff like whispercpp and llamacpp, which basically make the CPU unusable for a bit :P ).

    About point 4:

    To access everything I use tailscale; I was able to close all my ports while still being able to easily access everything on my main or mini PC without changing anything in my router.

    If you need to give someone access, I'd advise sharing your pihole node and the machine running the service.
    Then in their account a split DNS can be set up so that only your domains are resolved by your pihole; everything else can still go to their own DNS.

    If this is not possible and you need your service open on the internet, I'd suggest a VPS with a reverse proxy running tailscale, so it can communicate with your service when it receives requests while still not opening your LAN to the internet.
    Another option is tailscale funnel, but I think you're bound to the domain they give you. I haven't tried it, so you'd need to confirm.

  • How do I migrate a Lemmy account?
  • Ah, then no. The last thing I knew about it, you can't migrate accounts from one server to another, which is what you're trying to do here.
    As I mentioned, if you were able to move the keys which identify your account, it would be easy for someone to impersonate you.
    Also, your public keys are shared among all the instances you've interacted with, so this might break your interactions there.

  • How do I migrate a Lemmy account?
  • Do you still have the old database? You should be able to move your instance around as long as you have a dump of your DB; that's where all the keys of each community and user in your instance are. Those are the ones telling other instances you're actually you. If you lose those, I don't know of anything that can be done so that other instances flush your old content and treat you as a new account. But I would count on this being intentional, since otherwise it could lead to people impersonating someone else if they get hold of the domain without the DB.

    EDIT: hmm, maybe I didn't understand correctly: are you trying to move to a new domain, or to a new server with the same domain?
    What's re-home?

  • Is there any reason the fediverse doesn't use torrents to not ddos itself?
  • Yeah, I just searched a bit and found this https://stackoverflow.com/questions/28348678/what-exactly-is-the-info-hash-in-a-torrent-file

    The torrent file contains the hashes of each piece, plus the hash of the info about the files (which includes those piece hashes), so clients get complete validation of the content and the set of files being received.
    I wonder if clients only validate when receiving or also when sending data; that way seeding could stop when a file has been corrupted, instead of relying on the tracker or other clients to distrust someone who made a mistake like the OP of that post.
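A minimal sketch of that per-piece check, assuming BitTorrent v1 (which stores one SHA-1 digest per fixed-size piece in the .torrent's info dictionary; the piece size and data here are made up for the demo):

```python
import hashlib

PIECE_LENGTH = 16384  # fixed when the torrent is created; 16 KiB for this demo

def piece_hashes(data: bytes, piece_length: int = PIECE_LENGTH) -> list[bytes]:
    """Hash each fixed-size piece, like the 'pieces' field of a .torrent."""
    return [
        hashlib.sha1(data[i:i + piece_length]).digest()
        for i in range(0, len(data), piece_length)
    ]

def verify(data: bytes, expected: list[bytes]) -> bool:
    """What a downloader effectively does: recompute and compare per piece."""
    return piece_hashes(data) == expected

original = bytes(range(256)) * 200   # stand-in for the original payload
published = piece_hashes(original)   # these digests live in the .torrent file

transcoded = b"\xff" + original[1:]  # "same" file after re-encoding in place
assert verify(original, published)
assert not verify(transcoded, published)
```

This is why a receiving client can always reject poisoned pieces; whether a sending client re-checks its own disk before seeding is a client implementation detail, which is exactly the open question above.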

  • Is there any reason the fediverse doesn't use torrents to not ddos itself?
  • How do torrents validate the files being served?

    Recently I read a post where OP said they were transcoding torrents in place and still seeding them, so their question was whether this was possible, since the files were actually not the same anymore.
    A comment said yes: the torrent was being seeded with the new files and they were "poisoning" the torrent.

    So, how could this be prevented if torrents were implemented as a CDN?
    And in general, how is this possible? I thought torrents could only use the original files, maybe via a hash, and prevent any other data being sent.

  • Best way to backup files
  • I also like local-only, with a similar setup to yours: rsync to an HDD and to an SSD.
    But I would also recommend you follow that suggestion; you need an external backup managed by someone else (encrypted, of course) so you have options if anything happens to everything local.
    It's up to you how much you're willing to pay to be sure you can retrieve your data.

    I'm using iDrive e2, it says it has a limited offer, but it's been there for over a year.

    I'm basically paying $1.25 per month for 2TB (it's charged all at once for 24 months) https://www.idrive.com/s3-storage-e2/pricing

  • Self-hosted diary
  • A note-taking app can be turned into a diary app if you just create a note for each day.
    Even better if you later want to expand a section of a diary entry without actually modifying it or jumping between apps.

    Obsidian can easily help you tag and link each note and the themes/topics in them.
    There are several plugins for creating daily notes, which would be your diary entries.
    Also it's local-only; you can pair it with any sync service: the one Obsidian provides, git, any cloud storage, or ones which work directly with the files, like syncthing.
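The daily-notes idea is tiny enough to sketch. This is not Obsidian's plugin API, just the file convention those plugins follow (a date-named markdown file in the vault; paths here are hypothetical):

```python
import datetime
import pathlib
import tempfile

def ensure_daily_note(vault: pathlib.Path, day: datetime.date) -> pathlib.Path:
    """Create the diary entry for a day (YYYY-MM-DD.md) if it doesn't exist yet."""
    note = vault / f"{day.isoformat()}.md"
    if not note.exists():
        note.write_text(f"# {day.isoformat()}\n\n")
    return note

# Demo in a temp dir standing in for the vault:
vault = pathlib.Path(tempfile.mkdtemp())
note = ensure_daily_note(vault, datetime.date(2024, 5, 1))
print(note.name)  # → 2024-05-01.md
```

Since the notes are plain files, anything that syncs a folder (git, syncthing, cloud storage) syncs the diary too, which is the point made above.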

    Just curious, what are the special features you expect from a diary service/app which a note taking one doesn't have?

  • If we get two sets of chromosomes, how does our body decide which genes to use?
  • Yes, each sperm and egg is unique, since the process they go through ensures the chromosomes have been mixed.

    Both sex cells (gametes) go through meiosis.

    > shuffles the genes between the two chromosomes in each pair (one received from each parent), producing lots of recombinant chromosomes with unique genetic combinations in every gamete [...] produces four genetically unique cells, each with half the number of chromosomes as in the parent

  • If we get two sets of chromosomes, how does our body decide which genes to use?
  • You get half of your chromosomes from each of your parents, so their bodies are in charge of setting which half their child will use.
    Afterwards, which trait will be present comes down to dominant and recessive genes.
    (Of course this is more complicated, and someone might do a better job explaining it in depth.)

  • datahoarder @lemmy.ml pe1uca @lemmy.pe1uca.dev

    What are your average file sizes for movies and series?

    I'm looking at my library and I'm wondering if I should process some of it to reduce the size of some files.

    There are some movies in 720p that are 1.6~1.9GB each. And then there are some at the same resolution that are 2.5GB. I even have some in 1080p which are just 2GB. I only have two movies in 4k; one is 3.4GB and the other is 36.2GB (I can't really tell the difference in detail since I don't have 4k displays).

    And then there's an anime I have twice at the same resolution; one set of files is around 669~671MB, the other set 191MB each (although here the quality difference is kind of noticeable while playing them, as opposed to the other files, where I compared extracted frames).

    What would you do? What's your target size for movies and series? What bitrate do you go for, and in which codec?

    Not sure if it's kind of blasphemy around here to talk about compromising quality for size, hehe, but I don't know where else to ask this. I was planning on using these settings in ffmpeg, what do you think? I tried it on an anime at 1080p, going from 670MB to 570MB, and I wasn't able to tell the difference in quality when extracting a frame from the input and the output.

    ```
    ffmpeg -y -threads 4 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -i './01.mp4' -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0
    ```

    9

    How to conduct a software audit?

    I need to help audit a project from another team. I got pointers on what's expected to be checked, but I don't have templates for the documents expected from an audit report, which also means I'm not sure what the usual process is to conduct an internal audit. I mean, I might as well read the whole repo, but maybe that's too much?

    Any help or pointers on what I need to investigate to get started would be great!

    12

    What is the technical explanation for limiting the size of SSD you can connect?

    cross-posted from: https://lemmy.pe1uca.dev/post/1136490

    > I'm checking this mini pc https://www.acemagic.com/products/acemagic-ad08-intel-core-i9-11900h-mini-pc
    >
    > It says the M2 and SATA ports are limited to 2TB, but I can't imagine why that's the case.
    > Could there be a limit on the motherboard? On the CPU?
    > If this is done in software (windows), it probably won't matter since I'm planning to switch to linux.
    >
    > What I want to avoid is buying it and being unable to use an 8TB drive.

    5

    What is the technical explanation for limiting the size of SSD you can connect?

    I'm checking this mini pc https://www.acemagic.com/products/acemagic-ad08-intel-core-i9-11900h-mini-pc

    It says the M2 and SATA ports are limited to 2TB, but I can't imagine why that's the case. Could there be a limit on the motherboard? On the CPU? If this is done in software (windows), it probably won't matter since I'm planning to switch to linux.

    What I want to avoid is buying it and being unable to use an 8TB drive.

    4

    What's a good use for an edge TPU?

    I started tinkering with frigate and saw the option to use a coral ai device to process the video feeds for object recognition.

    So, I started checking a bit more what else could be done with the device, and everything listed in the site is related to human recognition (poses, faces, parts) or voice recognition.

    Somewhere I read that stable diffusion or LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.

    What other good/interesting uses can these devices have? What are some of your deployed services using these devices for?

    7

    Sharing caddy HTTPS certificates

    I have a few servers running some services using a custom domain I bought some time ago. Each server has its own instance of caddy to handle a reverse proxy. Only one of those servers can actually do the DNS challenge to generate the certificates, so I was manually copying the certificates to each of the other caddy instances that needed them, and using the tls directive for that domain to read the files.

    Just found there are two ways to automate this: shared storage and on-demand certificates. So here's what I did to make it work with each one; hope someone finds it useful.

    Shared storage

    This one is in theory straightforward: you just mount a folder which all caddy instances will use. I went the route of using sshfs, so I created a user and added ACLs to allow the local caddy user and the new remote user to write to the storage.

    ```
    setfacl -Rdm u:caddy:rwx,d:u:caddy:rwX,o:--- ./
    setfacl -Rdm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./
    setfacl -Rm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./
    ```

    Then on the server which will use the data, I just mounted it:

    ```
    remote_user@<main_caddy_host>:/path/to/caddy/storage /path/to/local/storage fuse.sshfs noauto,x-systemd.automount,_netdev,reconnect,identityfile=/home/remote_user/.ssh/id_ed25519,allow_other,default_permissions,uid=caddy,gid=caddy 0 0
    ```

    And included the mount as the caddy storage:

    ```
    {
        storage file_system /path/to/local/storage
    }
    ```

    On demand

    This one requires a separate service, since caddy can't properly serve the file needed by the get_certificate directive.

    We could run a service which reads the key and crt files and combines them directly on the main caddy instance, but I chose to serve the files and combine them on the server which needs them.

    So, in my main caddy instance I have this. I restrict access to my tailscale IP, and include the /ask endpoint required by the on-demand configuration.

    ```
    @certificate host cert.localhost
    handle @certificate {
        @blocked not remote_ip <requester_ip>
        respond @blocked "Denied" 403

        @ask {
            path /ask*
            query domain=my.domain domain=jellyfin.my.domain
        }
        respond @ask "" 200

        @askDenied path('/ask*')
        respond @askDenied "" 404

        root * /path/to/certs

        @crt {
            path /cert.crt
        }
        handle @crt {
            rewrite * /wildcard_.my.domain.crt
            file_server
        }

        @key {
            path /cert.key
        }
        handle @key {
            rewrite * /wildcard_.my.domain.key
            file_server
        }
    }
    ```

    Then on the server which will use the certs, I run a service for caddy to make the HTTP request to. This also includes another way to handle the /ask endpoint, since wildcard certificates are not matched with *: caddy actually asks for each subdomain individually, and the example above can't handle wildcards like domain=*.my.domain.

    ```go
    package main

    import (
        "io"
        "net/http"
        "strings"

        "github.com/labstack/echo/v4"
    )

    func main() {
        e := echo.New()

        e.GET("/ask", func(c echo.Context) error {
            if domain := c.QueryParam("domain"); strings.HasSuffix(domain, "my.domain") {
                return c.String(http.StatusOK, domain)
            }
            return c.String(http.StatusNotFound, "")
        })

        // Fetch the crt and key from the main caddy instance and serve them combined.
        e.GET("/cert.pem", func(c echo.Context) error {
            crtResponse, err := http.Get("https://cert.localhost/cert.crt")
            if err != nil {
                return c.String(http.StatusInternalServerError, "")
            }
            defer crtResponse.Body.Close()
            crtBody, err := io.ReadAll(crtResponse.Body)
            if err != nil {
                return c.String(http.StatusInternalServerError, "")
            }

            keyResponse, err := http.Get("https://cert.localhost/cert.key")
            if err != nil {
                return c.String(http.StatusInternalServerError, "")
            }
            defer keyResponse.Body.Close()
            keyBody, err := io.ReadAll(keyResponse.Body)
            if err != nil {
                return c.String(http.StatusInternalServerError, "")
            }

            return c.String(http.StatusOK, string(crtBody)+string(keyBody))
        })

        e.Logger.Fatal(e.Start(":1323"))
    }
    ```

    And in the Caddyfile, request the certificate from this service:

    ```
    {
        on_demand_tls {
            ask http://localhost:1323/ask
        }
    }

    *.my.domain {
        tls {
            get_certificate http http://localhost:1323/cert.pem
        }
    }
    ```

    1
    datahoarder @lemmy.ml pe1uca @lemmy.pe1uca.dev

    SSD hides contents after a few days

    Seems the SSD sometimes heats up and the content disappears from the device, mostly on my router, sometimes on my laptop. Do you know what I should configure to put the drive to sleep, or something similar to reduce the heat?

    I'm starting up my datahoarder journey now that I replaced my internal nvme SSD.

    It's just a 500GB one which I attached to my d-link router running openwrt. I configured it with samba and everything worked fine when I finished the setup. I just have some media files in there, so I read the data from jellyfin.

    After a few days the content disappears. It's not a connection problem with the shared drive, since I ssh into the router and the files aren't shown there either. I need to physically remove the drive and connect it again. When I do this I notice it's somewhat hot. Not scalding, just hot.

    I also tried connecting it directly to my laptop running ubuntu. There the drive sometimes remains cool and the data shows up without issue for days. But sometimes it also heats up and the data disappears (even when the data was not being used, i.e. I didn't configure jellyfin to read from the drive).

    I'm not sure how to let the ssd sleep for periods of time, or throttle it so it can cool off. Any suggestions?

    4

    What's your approach to email aliases?

    I started fiddling with my alias service and started wondering what approach other people take. Not necessarily the best option, but what do you prefer? What are the pros and cons you see with each option?

    Currently I'm using anonaddy and proton, so I have a few options to create aliases.

    • The limited shared-domain aliases (from my current subscription level). Probably the only option to not be tracked if it were unlimited; I'd just have to pay more for the service.
    • Unlimited aliases with a subdomain of the shared domain. For example: baked6863.addy.io
    • Unlimited aliases with a custom domain.
    • Unlimited aliases with a subdomain of a custom domain. This is different from the one above since the domain could be used for different things, not dedicated to email.
    • Catch-all with addy. The downside I've read is people could spam any random word, and if it's then disabled, the people that had an incorrect alias wouldn't be able to communicate anymore.
    • Catch-all with proton. Proton has a limit on how many email addresses you actually have, so when you receive an email to an alias and want to reply to it, you'll be doing it from the catch-all address instead of the alias.

    What do you think? What option would you choose?

    2

    How to reload an infinite world from any point using WFC?

    I started delving into world and dungeon generation with different techniques. The one I want to try is wave function collapse.

    There are several videos and repos explaining and showcasing how it works and how it can be used to generate an infinite world.

    One question I have, and haven't seen mentioned anywhere, is: how do I recreate/reload the map from any point other than the original starting one?

    So, AFAIK the algorithm starts from a few tiles/pixels in a starting position, or picks their position at random, and then collapses the rest of the map with the set of rules given to the building blocks. But if these starting tiles/pixels are far away by the time a player saves, then I can only think of starting from them again and regenerating up to the saved point to show the same world, which of course could mean a very long loading screen.

    Maybe the save could include the current seed, but the generation can advance differently when the player goes back, which means the algorithm would generate a different portion of the map. How can I ensure the world is regenerated as it was?

    While writing this, I'm thinking I could generate the seed of a block of tiles/pixels based on the seed of neighboring blocks and its coordinates in the map, something like left: seed+X, right: seed-Y, where X and Y are calculated from the coordinates of the block. This way I can save the seed of the current block and easily recalculate the seeds used to generate all the adjacent blocks. What do you think about this approach?
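A hash-based variant of that idea can be sketched like this (function names are hypothetical). Deriving every block's seed from the world seed plus its absolute coordinates avoids the collisions an additive seed+X/seed-Y scheme can hit when different paths reach the same sums:

```python
import hashlib

def block_seed(world_seed: int, bx: int, by: int) -> int:
    """Deterministic seed for the block at (bx, by).

    Because it depends only on the world seed and absolute coordinates,
    any block can be regenerated on its own: no need to replay the
    generation from the original starting tiles to reach a saved position."""
    payload = f"{world_seed}:{bx}:{by}".encode()
    return int.from_bytes(hashlib.sha256(payload).digest()[:8], "big")

# A save file then only needs world_seed and the player's block coordinates.
assert block_seed(1234, 10, -3) == block_seed(1234, 10, -3)  # stable on reload
assert block_seed(1234, 10, -3) != block_seed(1234, -3, 10)  # order matters
assert block_seed(4321, 10, -3) != block_seed(1234, 10, -3)  # seed matters
```

One caveat this sketch doesn't solve: WFC constraints still have to agree across block borders, so neighboring blocks must be collapsed against each other's edges regardless of how the seeds are derived.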

    0

    Use old android tablet for dashboard

    I have an old android tablet (and several phones) that I want to use for small applications in my home automation. For the most part, just showing a web page to quickly click something to activate it or read its status.

    My issue is that the installed OS is very old and of course there are no official updates. Custom roms are also somewhat old because of the age of the devices, and everyone says "don't use the rom of one device on another even if the models are very similar".

    So, my question is: what are my options if I can't use a pre-built rom? Could I keep the same OS and just restrict access to only my internal network? Not sure if I'm being too paranoid about the security risks of using these devices just to connect to my services.

    1

    S3 compatible video YT/twitch-like streaming service?

    What's your recommendation for a selfhosted service to stream some private videos from an S3-compatible service (vultr)?

    I was thinking a private peertube instance could work, but it requires the S3 files to be public and allow all origins, so I don't like that idea.

    The other one was to use rclone mount to have it as another block storage, but I don't know the cons of this, or whether it's even possible with this kind of service.

    This won't be for my camera videos (already have immich) nor for series/movies (jellyfin). It'll be for random videos from youtube, or twitch which I want to hoard.

    (Also if you have a recommendation for cheap online storage for this it'll be appreciated, Vultr's is $0.006/GB)

    4

    All mesh office chairs recommendations for long sessions?

    I've been looking for an all-mesh chair, since I tend to run hot and every chair I use ends up making me sweat.

    There's this one Naz President Full-Back Mesh but I can't find any reviews for it.

    There are also these two on amazon, the Razzor Ergonomic Mesh Office Chair and the FelixKing Ergonomic Desk Chair, but I've been reading mixed reviews (as with any other chair on amazon).

    So, do you guys have any budget all mesh chair recommendation? Or maybe a chair which doesn't heat up so much or cools down quickly?

    (I currently have a gaming chair... the worst purchase I've ever made for my back.)

    3

    What's a good gethomepage-like project to show different types of information on a screen? Not only for deployed services

    I want to have something similar to a Google Nest Hub to display different types of information, like weather, bus times, my own services' status, a photo gallery, etc.

    It's not a problem if I have to manually write plugins for custom integrations. It'd be better if it's meant to be shown in a web browser.

    I remember there were some projects for digital mirror displays, or kiosk screens, but I can't find a good one to self-host and extend to my needs.

    The ones I've found are focused on showing stats of deployed services and quick links to them.

    4

    Do you think I should wait for the Black Friday sales?

    I just saw a 40% discount at Bureau en Gros on a chair, but I'm not sure whether that discount is good enough compared to whatever might be offered that day.

    Would you wait for it? Or am I putting too much faith in that day?

    (I'm not used to that day's discounts, so I don't know what to expect.)

    2

    Audio inserted with ffmpeg has no sound

    cross-posted from: https://lemmy.pe1uca.dev/post/317214

    I'm cross-posting it here to check if you guys know, or if you have an idea of which would be the best community to ask.

    > I'm trying to add the audio of one video to another so I can have multiple tracks for my media server (Jellyfin).
    >
    > The issue is the new track doesn't have any sound; it's there with the proper metadata, but it's just mute.
    > I tried playing it with Jellyfin and with VLC.
    > The original audio tracks play fine.
    >
    > This is my command:
    >
    >     ffmpeg -i .\3.english.mkv -i .\3.french.mp4 -c copy -map 0 -map 1:a:0 -y .\3.mix.mkv
    >
    > I also already tried adding -c:a:1 ac3 since this is the format of the audio in the mkv file.
    >
    > The data of the original audio of the mkv is:
    >
    >     Stream #0:1(eng): Audio: ac3, 48000 Hz, 5.1(side), fltp, 384 kb/s
    >     Metadata:
    >       title           : English
    >       BPS-eng         : 384000
    >       DURATION-eng    : 02:21:42.176000000
    >       NUMBER_OF_FRAMES-eng: 265693
    >       NUMBER_OF_BYTES-eng: 408104448
    >       _STATISTICS_WRITING_APP-eng: mkvmerge v25.0.0 ('Prog Noir') 64-bit
    >       _STATISTICS_WRITING_DATE_UTC-eng: 2018-07-23 09:18:58
    >       _STATISTICS_TAGS-eng: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
    >
    > The data of the file I'm trying to inject from the mp4 is:
    >
    >     Stream #0:1[0x2](fre): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s (default)
    >     Metadata:
    >       creation_time   : 2014-04-17T18:14:55.000000Z
    >       handler_name    : movie.track_2.aac
    >       vendor_id       : [0][0][0][0]
    >
    > Do you guys have any idea what the issue might be?
    >
    > I also tried extracting the audio to a file, and the aac file plays fine on its own; it's only when adding it to the mkv that it doesn't work.

    2

    Audio inserted has no sound

    I'm trying to add the audio of one video to another so I can have multiple tracks for my media server (Jellyfin).

    The issue is the new track doesn't have any sound; it's there with the proper metadata, but it's just mute. I tried playing it with Jellyfin and with VLC. The original audio tracks play fine.

    This is my command:

        ffmpeg -i .\3.english.mkv -i .\3.french.mp4 -c copy -map 0 -map 1:a:0 -y .\3.mix.mkv

    I also already tried adding -c:a:1 ac3 since this is the format of the audio in the mkv file.

    The data of the original audio of the mkv is:

        Stream #0:1(eng): Audio: ac3, 48000 Hz, 5.1(side), fltp, 384 kb/s
        Metadata:
          title           : English
          BPS-eng         : 384000
          DURATION-eng    : 02:21:42.176000000
          NUMBER_OF_FRAMES-eng: 265693
          NUMBER_OF_BYTES-eng: 408104448
          _STATISTICS_WRITING_APP-eng: mkvmerge v25.0.0 ('Prog Noir') 64-bit
          _STATISTICS_WRITING_DATE_UTC-eng: 2018-07-23 09:18:58
          _STATISTICS_TAGS-eng: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES

    The data of the file I'm trying to inject from the mp4 is:

        Stream #0:1[0x2](fre): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s (default)
        Metadata:
          creation_time   : 2014-04-17T18:14:55.000000Z
          handler_name    : movie.track_2.aac
          vendor_id       : [0][0][0][0]

    Do you guys have any idea what the issue might be?

    I also tried extracting the audio to a file, and the aac file plays fine on its own; it's only when adding it to the mkv that it doesn't work.
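    To narrow down whether the mux or the players are at fault, one thing worth trying is decoding just the new track back out of the muxed file and measuring its volume (a sketch using ffmpeg's volumedetect filter; file and track names follow the command above):

```shell
# Decode only the added (second) audio track to a wav file; if this wav
# is also silent, the mux itself is the problem, not the players.
ffmpeg -i 3.mix.mkv -map 0:a:1 -c:a pcm_s16le track2.wav

# Print the mean/max volume of the extracted track; a truly silent
# stream reports values around -91 dB.
ffmpeg -i track2.wav -af volumedetect -f null -
```

    If the levels come back normal, the problem is more likely track selection/disposition in the players than the remux.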

    2

    Is Wealthsimple better than EQ Bank?

    I've seen a lot of people mentioning EQ Bank for keeping an emergency fund, since it offers 2.5% interest.

    I just recently learned about Wealthsimple, which offers 4%: https://www.wealthsimple.com/en-ca/spend

    So I'm wondering if you guys know of any drawbacks to saving with Wealthsimple instead of EQ.

    8

    Is Wealthsimple a good option as a HISA?

    Is it "CEHI" in French? Sorry, I still need to use a translator.

    EQ Bank seems to be the best at 2.5%, but WS gives 4%. So I'm wondering why more people don't recommend WS and always recommend EQ instead.

    Is there something about WS I'm not seeing? In the "X" section it says "Note: Wealthsimple is neither a bank nor a CDIC member"; could that be the reason?

    3

    Should I accept a 3.5% rent increase for 2025?

    I think I messed up a bit by accepting a 4% increase for 2024. I had read here about the increases expected for that year, but the company accepted my offer immediately, which makes me think that article is wrong. Now I think the 3.5% offer they made me for 2025 may be higher than it should be.

    Should I reject it now and wait until next year to see how things go? Should I negotiate lower? Or should I just accept it?

    4