• 3 Posts
  • 38 Comments
Joined 1 month ago
Cake day: January 1st, 2026

  • My concern is that there is enough documentation on who isn’t calling anyone and who has no connection to anyone on the outside…

    If there were racist, genocidal people within the system who were able to recognize each other (via tattoos, memberships) and assign each other to certain duties, and certain people were put into cargo planes and dumped over the ocean, I’m not sure anyone would know.

    It’s very hard for detainees to communicate with anyone in places like that if the detainee doesn’t have money or anyone willing to accept a call. If someone who was a racist genocidal person was put in charge of putting people into groups (based on lawyers, family contact, etc), people who were completely disconnected could be put into a unique group, put in a certain part of a facility, and flown in cargo planes together.

    I just don’t know how likely this is. CECOT is a facility that holds people in a way that many international organizations would consider torture. If people were being killed in there, including people deported by the USA, I am very sure people would not know.

    If we as a society don’t know where 1200 previously detained people have gone, and if there’s no record of where they were sent, it’s hard to believe the state didn’t kill them, although I don’t know.

    I’m very ignorant on all of this. 25% was just a randomly chosen number. But what about 10% or 5%? I just don’t entirely understand the purpose of making immigrants who are being deported untrackable unless the point is to make it easier to kill them. If the purpose were just to make deportations harder to legally contest, the system could simply be slow but still have tracking built into it… but it doesn’t have that, right? I really don’t know the answers to any of this.




  • They constantly measure DOMRect values using JavaScript; those measurements depend on the user’s hardware and rendering stack and can be used to fingerprint individual users.

    Imagine the cost of running duck.ai. What exactly is the revenue that it brings in?

    Of course, if it were some honeypot using DOMRect to track users (and DOMRect is not protected by Tor Browser, Mullvad Browser, etc.), then it doesn’t really matter that it brings in little revenue, since its value is in being a honeypot.

    Yes, DOMRect can be used legitimately in code without tracking users… but why does DuckDuckGo need to use it when they know it CAN be used to track users and users have no way to audit the servers?

    It’s really interesting that they measure DOMRect and not canvas, when privacy-aware users often block canvas fingerprinting but don’t block DOMRect.

    It’s sus



  • I can’t fathom that Signal is not a honeypot.

    Back when I tried to register, not only did they want a phone number (which usually links to IRL KYC stuff), but they also wanted me to complete a Google CAPTCHA that collected various metrics (canvas, etc.) from my device.

    Why is that needed? They say it’s to reduce spam; I just don’t believe it.

    Not only that, I can’t register using a Linux system. I simply MUST register with a mobile device (which I will likely have on me) that can potentially track me through its cellular modem, likely has listening devices inside, and has a camera attached that is very hard to cover (because it’s embedded into the glass, and covering it with anything messes with the swipe-up gesture).

    No organization would create something so incredibly hostile to people who don’t want mobile phones and don’t want phone numbers unless it were a honeypot. I even think that Signal was created in large part to siphon popularity away from XMPP before it could reach mass adoption.




  • That’s exactly what I am trying to do; I’m just not sure how to do it. I have the hardware needed. I just need to set up a Docker container with PyTorch, find a way to set up Gradio inside that, and then add TrOCR from Hugging Face, and then I’m good. I’m just not totally sure how to do that, and it seems hard; when I ask AI for advice, it often says “just run the following” and it’s wrong, and I’m not skilled enough to know why.
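
    What I’m picturing is something like this minimal sketch, assuming transformers, torch, gradio, and pillow are installed (the function name recognize is just illustrative, and the same file could later be copied into a PyTorch Docker image):

    import gradio as gr
    import torch
    from PIL import Image
    from transformers import TrOCRProcessor, VisionEncoderDecoderModel

    # load the handwritten TrOCR checkpoint once at startup
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model_name = "microsoft/trocr-base-handwritten"
    processor = TrOCRProcessor.from_pretrained(model_name)
    model = VisionEncoderDecoderModel.from_pretrained(model_name).to(device)

    def recognize(image: Image.Image) -> str:
        # TrOCR works best on an image of a single line of handwritten text
        pixel_values = processor(images=image.convert("RGB"), return_tensors="pt").pixel_values.to(device)
        generated_ids = model.generate(pixel_values)
        return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # a one-function web UI: upload an image, get the recognized text back
    demo = gr.Interface(fn=recognize, inputs=gr.Image(type="pil"), outputs="text")
    demo.launch()  # serves locally, by default at http://127.0.0.1:7860

    Running that file with python3 and opening the printed URL in a browser gives a GUI-style workflow without any further setup.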






  • Terminal error after running GPT code:

    
    
    python3 trocr_pdf.py small.pdf output.txt
    Traceback (most recent call last):
      File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 479, in cached_files
        hf_hub_download(
      File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
        return fn(*args, **kwargs)
      File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1007, in hf_hub_download
        return _hf_hub_download_to_cache_dir(
      File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1124, in _hf_hub_download_to_cache_dir
        os.makedirs(os.path.dirname(blob_path), exist_ok=True)
      File "/usr/lib/python3.10/os.py", line 215, in makedirs
        makedirs(head, exist_ok=exist_ok)
      File "/usr/lib/python3.10/os.py", line 225, in makedirs
        mkdir(name, mode)
    PermissionError: [Errno 13] Permission denied: '/home/user/.cache/huggingface/hub/models--microsoft--trocr-base-handwritten'
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/home/user/Documents/trocr_pdf.py", line 39, in <module>
        main(pdf_path, out_path)
      File "/home/user/Documents/trocr_pdf.py", line 11, in main
        processor = TrOCRProcessor.from_pretrained(model_name)
      File "/home/user/.local/lib/python3.10/site-packages/transformers/processing_utils.py", line 1394, in from_pretrained
        args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
      File "/home/user/.local/lib/python3.10/site-packages/transformers/processing_utils.py", line 1453, in _get_arguments_from_pretrained
        args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
      File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py", line 489, in from_pretrained
        raise initial_exception
      File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py", line 476, in from_pretrained
        config_dict, _ = ImageProcessingMixin.get_image_processor_dict(
      File "/home/user/.local/lib/python3.10/site-packages/transformers/image_processing_base.py", line 333, in get_image_processor_dict
        resolved_image_processor_files = [
      File "/home/user/.local/lib/python3.10/site-packages/transformers/image_processing_base.py", line 337, in <listcomp>
        resolved_file := cached_file(
      File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 322, in cached_file
        file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)
      File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 524, in cached_files
        raise OSError(
    OSError: PermissionError at /home/user/.cache/huggingface/hub/models--microsoft--trocr-base-handwritten when downloading microsoft/trocr-base-handwritten. Check cache directory permissions. Common causes: 1) another user is downloading the same model (please wait); 2) a previous download was canceled and the lock file needs manual removal.
    

    LLMs are so bad at code sometimes. This happens all the time with LLMs and code for me: the code is unusable and it saves no time because it’s a rabbit hole leading to nowhere.

    I also don’t know if this is the right approach to the problem. Any sort of GUI would be easier. And this is hundreds of pages of handwritten material that I want to convert to text.
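
    That said, the traceback itself points at something fixable: the script can’t write to /home/user/.cache/huggingface, which usually means part of that cache is owned by root (e.g. from an earlier sudo run) or a canceled download left a lock behind. A hedged workaround, not a guaranteed fix, is to either chown the existing cache back to my user or point the download cache at a directory I definitely own (the path below is just an example):

    import os
    from transformers import TrOCRProcessor, VisionEncoderDecoderModel

    # use a cache directory that is definitely writable by the current user
    cache_dir = os.path.expanduser("~/trocr-cache")
    model_name = "microsoft/trocr-base-handwritten"

    processor = TrOCRProcessor.from_pretrained(model_name, cache_dir=cache_dir)
    model = VisionEncoderDecoderModel.from_pretrained(model_name, cache_dir=cache_dir)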


  • That’s not for TrOCR; it’s just generic OCR, which may not work for handwriting.

    I did try some of the GPT steps:

    pip install --upgrade transformers pillow pdf2image
    
    

    getting some errors:

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╺━━━━━━━━━ 3/4 [transformers]  WARNING: The scripts transformers and transformers-cli are installed in '/home/user/.local/bin' which is not on PATH.
      Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    mistral-common 1.5.2 requires pillow<11.0.0,>=10.3.0, but you have pillow 12.1.0 which is incompatible.
    moviepy 2.1.2 requires pillow<11.0,>=9.2.0, but you have pillow 12.1.0 which is incompatible.
    
    
    

    This is what GPT said to run, but it makes no sense to me because I don’t even have TrOCR downloaded or running at all.

    Install packages: pip install --upgrade transformers pillow pdf2image
    Ensure poppler is installed:
    
    Ubuntu/Debian: sudo apt install -y poppler-utils
    macOS: brew install poppler
    
    Execute: python3 trocr_pdf.py input.pdf output.txt
    

    That’s the script to save and run.

    #!/usr/bin/env python3
    import sys
    from pdf2image import convert_from_path
    from PIL import Image
    import torch
    from transformers import TrOCRProcessor, VisionEncoderDecoderModel
    
    def main(pdf_path, out_path="output.txt", dpi=300):
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model_name = "microsoft/trocr-base-handwritten"
        processor = TrOCRProcessor.from_pretrained(model_name)
        model = VisionEncoderDecoderModel.from_pretrained(model_name).to(device)
    
        pages = convert_from_path(pdf_path, dpi=dpi)
        results = []
        for i, page in enumerate(pages, 1):
            page = page.convert("RGB")
            # downscale if very large to avoid OOM
            max_dim = 1600
            if max(page.width, page.height) > max_dim:
                scale = max_dim / max(page.width, page.height)
                page = page.resize((int(page.width*scale), int(page.height*scale)), Image.Resampling.LANCZOS)
    
            pixel_values = processor(images=page, return_tensors="pt").pixel_values.to(device)
            generated_ids = model.generate(pixel_values, max_length=512)
            text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
            results.append(f"--- Page {i} ---\n{text.strip()}\n")
    
        with open(out_path, "w", encoding="utf-8") as f:
            f.write("\n".join(results))
        print(f"Saved OCR text to {out_path}")
    
    if __name__ == "__main__":
        if len(sys.argv) < 2:
            print("Usage: python3 trocr_pdf.py input.pdf [output.txt]")
            sys.exit(1)
        pdf_path = sys.argv[1]
        out_path = sys.argv[2] if len(sys.argv) > 2 else "output.txt"
        main(pdf_path, out_path)
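
    One more thing worth knowing before running this over hundreds of pages: microsoft/trocr-base-handwritten is a line-level model (it was fine-tuned on single lines of handwriting), so feeding it a whole page image usually produces one short, mostly wrong line per page. A rough sketch of a workaround, not proper line segmentation, is to crop each page into horizontal strips and recognize them one at a time (the strip height here is a guess that depends on the handwriting and the DPI):

    def ocr_page_in_strips(page, processor, model, device, strip_height=120):
        # naive segmentation: cut the page into fixed-height strips and OCR each one
        page = page.convert("RGB")
        lines = []
        for top in range(0, page.height, strip_height):
            strip = page.crop((0, top, page.width, min(top + strip_height, page.height)))
            pixel_values = processor(images=strip, return_tensors="pt").pixel_values.to(device)
            generated_ids = model.generate(pixel_values, max_length=128)
            lines.append(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
        return "\n".join(lines)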
    
    

  • I don’t remember exactly, but I have ROCm 7.2 installed, and there was something I was trying to install through pip for ROCm and it just wouldn’t work; it was as if ROCm 7.2 wasn’t out yet or the link didn’t work. The LLM tried multiple suggestions and they all failed, then I gave up. When I said “inside” pip, I don’t know if that’s accurate. I am very new to pip, decent at Linux, and I only know a small amount of coding and lack Python familiarity.




  • I am happy to participate. I do not want my IP and ping data sold to data brokers to serve targeted ads, track me, and feed the police surveillance state. I’m sure you are a good citizen who always keeps location on and feels life would be easier if everyone just complied, while you proudly put Ring cameras on every door. Not everyone is a tech-bro neo-feudalist.