• 1 Post
  • 24 Comments
Joined 9 months ago
Cake day: April 7th, 2024

  • If you have access to some sort of basic Linux system (cloud server, local server, whatever works for you) you can run a program on a timer such as https://isync.sourceforge.io/ (Debian package: isync), which reads email from one source and clones it to another. Be careful and run it in a security context that meets your needs (I use an encrypted local laptop at home that runs headless 24/7, think Raspberry Pi mode).

    This includes IMAP (1) -> IMAP (2) as well as IMAP -> Local and so on; as with any app you’ll need to spend a bit of time learning how to build the optimal config file for your needs, but once you get it going it’s truly a “set and forget” little widget. Use an on-fail service like https://healthchecks.io/ in your wrapper script to get notified on error, then go about your life.

    Edit: @mike_wooskey@lemmy.thewooskeys.com, I glanced at your comments and see you have a lot of self-hosting chops, so here’s a markdown doc of mine that uses isync to clone one IMAP provider (domain1.com) to a subfolder on another IMAP provider (domain2.com) for archiving. (Using a subfolder allows you to go both ways and use both domains normally.)

    ----

    Sync email via IMAP from host1/domain1 to a subfolder on host2/domain2 on a cron/timer. It can be reversed as well; just update Patterns to exclude the subfolders from being cross-replicated (looped) - see the reverse-channel sketch after the main config below.

    • Install the isync package: apt-get update && apt-get install isync

    Passwords for IMAP must be left on disk in plain text

    • Generate “app passwords” at the email providers, host1 can be READ only
    • Keep ${HOME}/.secure contents on an encrypted volume that you unlock manually (a setup sketch follows)
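
    For example, a minimal sketch of that directory setup (the psrc/pdst file names match the PassCmd lines in the config below; the password strings are placeholders):

    mkdir -p "${HOME}/.secure"
    chmod 700 "${HOME}/.secure"
    # paste the provider-generated app passwords (placeholders here)
    printf '%s' 'host1-app-password' > "${HOME}/.secure/psrc"
    printf '%s' 'host2-app-password' > "${HOME}/.secure/pdst"
    chmod 600 "${HOME}/.secure/psrc" "${HOME}/.secure/pdst"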

    The mbsync program keeps its transient index files in ${HOME}/.mbsync/ with one per IMAP folder; these are used to keep track of what it has already synced. Should something break, it may be necessary to delete one of these files to force a resync.
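
    For example (the state file name here is hypothetical; the exact names depend on your channel and folders):

    ls "${HOME}/.mbsync/"                          # one state file per synced folder
    rm "${HOME}/.mbsync/<file-for-broken-folder>"  # remove it, then re-run the sync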

    By design, mbsync will not delete a destination folder unless it is empty first; this means that if you delete a folder and all of its emails on the source in one step, the sync will break with an error/warning. Instead, delete all emails in the folder first, sync those deletions, then delete the now-empty folder on the source and sync again. See: https://sourceforge.net/p/isync/mailman/isync-devel/thread/f278216b-f1db-32be-fef2-ccaeea912524%40ojkastl.de/#msg37237271

    Simple crontab to run the script:

    0 */6 * * * /home/USER/bin/hasync.sh
    

    Main config for the mbsync program:

    ${HOME}/.mbsyncrc

    # Source
    IMAPAccount imap-src-account
    Host imap.host1.com
    Port 993
    User user1
    PassCmd "cat /home/USER/.secure/psrc"
    SSLType IMAPS
    SystemCertificates yes
    PipeLineDepth 1
    #CertificateFile /etc/ssl/certs/ca-certificates.crt
    
    # Dest
    IMAPAccount imap-dest-account
    Host imap.host2.com
    Port 993
    User user2
    PassCmd "cat /home/USER/.secure/pdst"
    SSLType IMAPS
    SystemCertificates yes
    PipeLineDepth 1
    #CertificateFile /etc/ssl/certs/ca-certificates.crt
    
    # Source map
    IMAPStore imap-src
    Account imap-src-account
    
    # Dest map
    IMAPStore imap-dest
    Account imap-dest-account
    
    # Transfer options
    Channel hasync
    Far :imap-src:
    Near :imap-dest:HASync/
    Sync Pull
    Create Near
    Remove Near
    Expunge Near
    Patterns *
    CopyArrivalDate yes
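
    As mentioned at the top, the sync can also be run in the other direction. A rough, untested sketch of what a reverse channel could look like (it assumes the same two accounts/stores above and archives host2’s mail into a HASync/ subfolder on host1; the Patterns line excludes the HASync subtrees so the two channels don’t copy each other’s archives in a loop):

    # Reverse direction: host2 -> HASync/ subfolder on host1
    Channel hasync-reverse
    Far :imap-dest:
    Near :imap-src:HASync/
    Sync Pull
    Create Near
    CopyArrivalDate yes
    Patterns * !HASync !HASync/*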
    

    This script leverages healthchecks.io to alert on failure; replace the XXXXX placeholder with the UUID from your check’s ping URL.

    ${HOME}/bin/hasync.sh

    #!/bin/bash
    
    # vars
    LOGDIR="${HOME}/log"
    TIMESTAMP=$(date +%Y-%m-%d_%H%M)
    LOGFILE="${LOGDIR}/mbsync_${TIMESTAMP}.log"
    HCPING="https://hc-ping.com/XXXXXXXXXXXXXXXXXXXXXXXXX"
    
    # preflight
    if [[ ! -d "${LOGDIR}" ]]; then
      mkdir -p "${LOGDIR}"
    fi
    
    # sync
    echo -e "\nBEGIN $(date +%Y-%m-%d_%H%M)\n" >> "${LOGFILE}"
    /usr/bin/mbsync -c "${HOME}/.mbsyncrc" -V hasync 1>>"${LOGFILE}" 2>&1
    EC=$?
    echo -e "\nEC: ${EC}" >> "${LOGFILE}"
    echo -e "\nEND $(date +%Y-%m-%d_%H%M)\n" >> "${LOGFILE}"
    
    # report
    if [[ $EC -eq 0 ]]; then
      curl -fsS -m 10 --retry 5 -o /dev/null "${HCPING}"
      find "${LOGDIR}" -type f -mtime +30 -delete
    fi
    
    exit $EC
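
    After dropping the script in place, make it executable and do a one-off run before enabling the cron entry:

    chmod +x "${HOME}/bin/hasync.sh"
    "${HOME}/bin/hasync.sh"; echo "exit code: $?"
    tail "${HOME}"/log/mbsync_*.log   # confirm the run looks sane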
    

  • scsi@lemm.ee to Ask Lemmy@lemmy.world · *Permanently Deleted* · +13/−1 · 2 months ago

    Along this line of thinking, I use Lemmy and Mastodon as complementary rather than competing, but not in the way people want/use X/Bluesky. Lemmy (reddit) is great for the use you outline; Mastodon (and Pixelfed) supply a visual experience if you make it work that way and don’t expect/want an X-like experience (so think more Instagram). Lemmy lacks multireddits, which could solve some of this Mastodon use case; on reddit I have a multireddit named “Gallery” which combines a dozen picture-only subreddits.

    One can follow hashtags like #photography or #catsofmastodon, discover like-minded profiles who only post pictures and minimal talk/chatter (a lot of actual skilled photographers are present) and follow those profiles. It provides an experience that rounds out Lemmy, but I do admit I would love a “gallery”-like view in the apps to streamline the hashtag viewing (Pixelfed does this specifically, but people are spread all over the planet; Mastodon proper pulls in federated data more easily, IMHO).


  • To try to boil down the complex answers: if you are basically familiar with PGP or SSH keys, the concept of a Passkey is sort of in the same ballpark. But instead of using the same SSH keypair more than once, Passkeys create a new keypair for every use (website) and possibly every device (e.g. 2 phones using 1 website may create 2 sets of keypairs, one on each device) - and additionally embed the username (making it “one-click login”). A toy sketch using SSH tooling follows the list below:

    • creating a passkey is the client and server establishing a ring of trust (“challenge”) and then generating a public and private pair of keys (think ssh-keygen ...)
    • embedded alongside the keypair are the user ID/username and a credential ID, which sort of maps to the three fields of an SSH keypair (key type, key, optional user ID) but not exactly - think concept, not details
    • when using a passkey, the server sends the client a “challenge”, the client prompts the user to unlock the private key (device PIN, biometric, Bitwarden master password, etc.)
    • the “challenge” (think crypto math puzzle) is signed with the private key and returned to the server along with the username and credential ID
    • the server, which has stored the public key, looks it up using the username + credential ID, then verifies the signature somewhat like SSH or PGP does
    • like SSH or PGP, this means the private key never leaves the device/etc. being used by the client and is used to only sign the crypto math puzzle challenge
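
    To make the register/challenge/sign/verify loop concrete, here’s a toy version using OpenSSH’s file-signing tooling (this is not WebAuthn itself, just the same keypair idea; every name here is made up):

    # "register": create a keypair; the private key stays with the "client"
    ssh-keygen -t ed25519 -f ./demo_key -N "" -C "user@example"

    # "server" stores the public key for this username/credential
    printf 'user@example %s\n' "$(cut -d' ' -f1,2 ./demo_key.pub)" > allowed_signers

    # "server" sends a random challenge; "client" signs it with the private key
    openssl rand -hex 32 > challenge.txt
    ssh-keygen -Y sign -f ./demo_key -n passkey-demo challenge.txt   # -> challenge.txt.sig

    # "server" verifies the signature against the stored public key
    ssh-keygen -Y verify -f allowed_signers -I user@example -n passkey-demo -s challenge.txt.sig < challenge.txt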

    The client’s private key is hopefully stored in a secure part of the phone/laptop (“enclave” or TPM hardware module), which locks it to that device; using a portable password manager such as Bitwarden instead is attractive since the private keys are stored in BW’s data (so they can be synced across devices, backed up, etc.).

    They use the phrase “replay” a lot to mean that sending the same password to a website is vulnerable to it being intercepted and reused n+1 times (by an attacker); in the keypair model this doesn’t happen because each “challenge” is a unique crypto math puzzle generated dynamically for every use, like TOTP/2FA but “better” because there’s no simple shared seed (TOTP/2FA clients save a constant seed, which is not as robust cryptographically).


  • The other data shows that posts and comments are going up linearly (a little suspicious, but OK), but I wonder how the modlog affects the data (meaning how and when it is captured). I made one comment on an honest post yesterday (hosted on a remote instance), and then the post was deleted by admins like so:

    Removed Post Any app for call recording ? reason: Rule 2: Please use !askandroid@lemdro.id for support questions.

    So my comment shows in my history but cannot actually be accessed. Was this comment counted? Was that post counted? Was I counted as an active user yesterday if that was the only activity I did all day? Was the one person who upvoted my comment before the thread was deleted counted?

    Lies, damn lies and statistics. :)



  • scsi@lemm.ee to No Stupid Questions@lemmy.world · *Permanently Deleted* · +2/−1 · 3 months ago

    As a sort of historical side comment regarding your concern about misinformation - “how much does it cost to register one?” has been the litmus test to use for a long time (I’m of an age). More specific to .info, it was one of the very first “new” TLDs introduced in 2002/2003 and the owners basically gave away millions of domains for free to gain market share.[1]

    This led to a lot of scammers, hackers, malware and whatnot infecting the entire .info TLD, and it got to the point where the whole TLD was still being blocked outright around 2012, almost 10 years after introduction.[2] It was hit with new “crackdowns” (enforcement rules) as well due to its overwhelming use for nefarious purposes.[3]

    Ad-hoc data from my own employment experience: in 2024 it’s still 100% blocked (as in ref [2]) by corporate firewalls that apply strict rules, along with many other TLDs that have the same troubled history (.xyz to name one) and the whole list of “free” domains. However, .info now generally costs $20 USD/yr (with many places offering a first-year discount of less than $5 USD), so I think it’s trying to turn itself around.

    Point being, “unrestricted” TLDs which are super cheap have historically tended to attract scammers, phishers, malware and other nefarious entities because the cost of doing business at scale is low (these guys register hundreds of domains to churn through for short periods of time - “keep moving, don’t get caught”). Having lived through this whole saga, I open all TLDs I know to be cheap/free in private/incognito tabs and treat them with suspicion at first.



  • I have been using Linux on laptops as my main/only compute since around 1997 (started with an Inspiron 4000, PII-400 IIRC); Dell is generally extremely boring and very Linux/BSD compatible. I have been buying gently used Precision models (typically via a local marketplace, Craigslist in the USA) as they tend to have better build quality and no janky custom parts (think “winmodem”). They last forever, and pretty much every Linux/BSD distro works. The most important thing is to stay away from Broadcom chips and look for Intel eth/wifi. Stay away from Inspiron to avoid hardware problems; in modern times those are the bottom-of-the-barrel janky hardware.

    The Dell Latitude line used by businesses are even more boring than Precisions and really always have been - their BIOS has a somewhat unique charging profile “always plugged in” to extend battery life - I use two ancient E6330 models tuned to super low power modes as mini-servers (think anything you’d use a raspberry Pi for) that have been chugging away for probably 5+ years just running cron jobs, backups, Syncthing services and whatever I toss on them. Throw an SSD in anything and it just works - power goes out, batteries act as UPS. $100 USD each, “just work”.

    Thinkpads have always been a Linux favorite, at least the old models when IBM owned the brand, though I’m not too sure about the modern Lenovo ones. The last Thinkpad I owned was a 32-bit one back in maybe 2010 and it worked just fine. They tend to be more expensive used than Dells (they retain their purchase price better, like a nice used auto).



  • At the quantity the OP might use, buying by the gallon might make more sense - having a look at Amazon, the popular concentrations in gallon+ sizes are 70% and 99.9% (about the same price, $25 USD/gal) - it probably makes more logistical sense to go with 70% here to reduce evaporation and increase usable liquid for these tall, thin objects (so let’s say “sloppy use” of oddly shaped, hard-to-handle glass).

    I’ll leave my update at 70% concentration as the more economical choice - I’d presume based on their comment that a soak in ZAP ($18 USD/gal) is needed first, followed by the iso method… so it’s a little expensive no matter what for something they might not care about that much.