Salamander

  • 4 Posts
  • 22 Comments
Joined 3 years ago
Cake day: December 19th, 2021

  • Ooh, cool! 😁 That detector seems to work only in “Geiger mode”, meaning it can count the number of X-ray/gamma particles but cannot estimate their energy. So the dedicated devices are still better in that they let you identify the source of the radiation by measuring the counts and the energy distribution simultaneously.

    It probably would not be too difficult to build the Open Gamma Detector into something like a PinePhone. I don’t think that has been done yet.



  • Yes. The camera pixels generate a current in response to light. You can add filters to block certain wavelengths of light (like UV) from reaching the camera sensor, and tune the pixels so that they respond more to specific colors. But X-rays and gamma rays pass right through the filter. Often they will pass through the sensor as well, but in the cases where they do get absorbed by the sensor, they can produce a current that looks to the camera’s readout electronics just like ordinary light would.
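
    As a rough illustration of how those absorption events show up, here is a minimal sketch (not a tested tool) that picks bright specks out of an otherwise dark, lens-covered frame by simple thresholding. The file name and the threshold rule are assumptions:

    ```python
    # Minimal sketch: count bright “specks” in a dark (lens-covered) camera frame.
    # Assumes a grayscale image saved as frame.png; the threshold is arbitrary.
    import numpy as np
    from PIL import Image

    frame = np.asarray(Image.open("frame.png").convert("L"), dtype=np.float64)
    threshold = frame.mean() + 8 * frame.std()  # well above normal sensor noise
    specks = np.argwhere(frame > threshold)

    print(f"{len(specks)} candidate X-ray hits")
    for y, x in specks[:10]:
        print(f"  pixel ({x}, {y}): value {frame[y, x]:.0f}")
    ```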

    The gamma detectors I mentioned are very, very sensitive. They respond to single X-ray/gamma-ray particles: they can count how many individual particles collide with a small crystal cube every second. These crystals are special in that they produce a very tiny flash of light when an X-ray or gamma particle collides with them. As an added bonus, these sensors can directly measure the energy of each particle from the strength of its flash, and from this information they can construct not only the total counts but also a spectrum. With this extreme sensitivity, these detectors can measure the small quantities of radiation that come from space, from rocks, and from other materials.
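
    To make the counts-plus-energies idea concrete, here is a small sketch of how per-pulse energies become a spectrum. The pulse data is synthetic (a decaying background plus a smeared 662 keV Cs-137 line), so the numbers are illustrative only:

    ```python
    # Sketch of how per-particle flash strengths become a gamma spectrum.
    # The pulse energies here are synthetic, standing in for real detector output.
    import numpy as np

    rng = np.random.default_rng(0)
    background = rng.exponential(scale=200.0, size=5000)       # keV
    cs137_line = rng.normal(loc=662.0, scale=30.0, size=1000)  # keV, Cs-137
    energies = np.concatenate([background, cs137_line])

    counts, edges = np.histogram(energies, bins=256, range=(0, 1500))
    print(f"total counts: {len(energies)}")

    # Find the strongest line above the low-energy background.
    mask = edges[:-1] > 400
    peak = edges[:-1][mask][counts[mask].argmax()]
    print(f"strongest peak above 400 keV: ~{peak:.0f} keV")
    ```

    A real spectrometer does essentially this continuously, after calibrating flash strength to keV.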

    I looked for a video of a phone going through an X-ray machine, and found these:

    https://www.youtube.com/watch?v=E8iSoPhtY3s

    https://www.youtube.com/watch?v=V1YaroH6lHA

    The white specks that you can see near second 25 (first video) and second 34 (second video) could be a result of the X-rays. I am not sure, but it seems reasonable to me. In contrast, when I put my Radiacode through the X-ray machine at the airport, it reacts very strongly and becomes saturated.



  • I have used XMPP for some time now; I tried Matrix for a bit, but I have stuck with XMPP so far.

    I found it very easy in practice to set up a Prosody XMPP server on a Raspberry Pi. In XMPP you have a core standard that is kept quite minimal, and you can then extend your implementation using XMPP Extension Protocols (XEPs) in a highly modular fashion. I like this approach of building on top of a light core with well-documented extensions very much.
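
    The modularity is visible even from the client side. As a minimal sketch using the slixmpp library (the account details are placeholders), each XEP is something you opt into explicitly:

    ```python
    # Sketch of XMPP’s opt-in modularity from the client side, using slixmpp.
    # The JID and password are placeholders.
    import slixmpp

    xmpp = slixmpp.ClientXMPP("user@example.org", "password")
    xmpp.register_plugin("xep_0030")  # Service Discovery
    xmpp.register_plugin("xep_0045")  # Multi-User Chat
    xmpp.register_plugin("xep_0199")  # XMPP Ping

    xmpp.connect()
    xmpp.process(forever=False)
    ```

    The server side works the same way: Prosody’s modules_enabled list turns individual XEPs on and off.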

    With Matrix, JSON is used instead of XML. I think that JSON is a nice format when trying to look under the hood at how the message data is structured; XML is a bit of a pain to look at, in my opinion. And I think JSON might be more efficient in how it moves the data around. So that is a big positive for me. But Matrix appears to be more focused on being feature-rich than on having a flexible modular structure. While it does have extensions, successful extensions have a chance of eventually being integrated into the core protocol. This makes the core feel bloated to me, because I have very minimal requirements.

    In terms of security, in XMPP you start with the core and then select the type of encryption that you like (OpenPGP, OMEMO, etc.). OMEMO encryption has plausible deniability built into its design, and plausible deniability is a property that I consider important for messaging. The modular approach of XMPP also means that these are choices one gets to make actively, and the protocols are open protocols that come from outside of XMPP. With Matrix you get their encryption protocol as part of the core: it is a protocol that they designed and that you need to accept in order to use their tool with encryption. It is probably a good protocol, but I don’t think it has plausible deniability built in, and that’s a choice you did not get to make.

    As for moderation, I don’t know. Do they mean moderation tools, or the actual absence of moderators and unmoderated communities? Because the latter is more a property of the people using the tool than of the tool itself. You can have your own private communities.

    If someone asks me, I could recommend Matrix but would rather recommend XMPP, depending on what they are looking for specifically.



  • If they can send me over the second half of my thesis I would appreciate it enormously! 😀

    The analytics tools that I am personally uncomfortable with involve dynamic, changing forms of data. I run GPSLogger on my phone (without a SIM card) and continuously log the GPS data to a text file. This data is then synced to my computer when WiFi is available. I can display it on a map using gpx-viewer and see a very detailed track of my own movements.
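
    For a sense of how little machinery this takes, here is a minimal sketch that reads such a log with the gpxpy library and prints the time-stamped positions a viewer would plot (the file path is a placeholder):

    ```python
    # Sketch: read a GPSLogger GPX file and print time-stamped positions,
    # the same data a map viewer like gpx-viewer would plot.
    # "track.gpx" is a placeholder path.
    import gpxpy

    with open("track.gpx") as f:
        gpx = gpxpy.parse(f)

    for track in gpx.tracks:
        for segment in track.segments:
            for point in segment.points:
                print(f"{point.time}  {point.latitude:.5f}, {point.longitude:.5f}")
    ```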

    I have explored this map with some friends/family. They get to see a time-stamped movie of my life - my trips to work, to the shop, when I go out, if I go on a trip, etc. The data displayed in this manner is rather intimate, personal information. Everyone I have shown this to has said that they would not be comfortable with such a map of their life existing… Well, if they are carrying an active phone with a SIM card, it does.

    To think that a company like Google can own such a map for a very large number of people makes me uncomfortable. On top of that, each of those map trajectories can be associated with an individual and their personality… They have the ability to pick out specific trajectories on the basis of the political ideologies or shopping behaviors of the personas behind them. This is extreme. I am of the opinion that the convenience afforded by these technologies does not justify the allocation of that super-power to the companies that enable them.

    A few years ago Facebook enabled a “Graph Search” feature. This allowed users to create search queries such as “friends of friends of X who like page Y and went to school near Z”. That tool seemed super cool on the surface, but it quickly became obvious how something like that could easily be exploited. Later, in Snowden’s book, I learned about the NSA’s XKeyscore, which is like an extra-powerful, no-consent-needed graph search that is available to some people. This is not just targeted ads.

    I guess that what I am trying to convey is… For me, making the privacy-conscious choice is about not contributing to the ecosystem of very concrete tools that give super-powers to groups of people that may not have my best interest in mind. In my mind it is something very tangible and concrete, and I find many of those convenience tradeoffs to be clearly worth it.



  • I did not know of the term “open washing” before reading this article. Unfortunately it does seem like the pending EU legislation on AI has created a strong incentive for companies to do their best to dilute the term and benefit from the regulations.

    There are some paragraphs in the article that illustrate the point nicely:

    In 2024, the AI landscape will be shaken up by the EU’s AI Act, the world’s first comprehensive AI law, with a projected impact on science and society comparable to GDPR. Fostering open source driven innovation is one of the aims of this legislation. This means it will be putting legal weight on the term “open source”, creating only stronger incentives for lobbying operations driven by corporate interests to water down its definition.

    […] Under the latest version of the Act, providers of AI models “under a free and open licence” are exempted from the requirement to “draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the elements set out in Annex IXa” (Article 52c:1a). Instead, they would face a much vaguer requirement to “draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model according to a template provided by the AI Office” (Article 52c:1d).

    If this exemption or one like it stays in place, it will have two important effects: (i) attaining open source status becomes highly attractive to any generative AI provider, as it provides a way to escape some of the most onerous requirements of technical documentation and the attendant scientific and legal scrutiny; (ii) an as-yet unspecified template (and the AI Office managing it) will become the focus of intense lobbying efforts from multiple stakeholders (e.g., [12]). Figuring out what constitutes a “sufficiently detailed summary” will literally become a million dollar question.

    Thank you for pointing out Grayjay, I had not heard of it. I will look into it.






  • Search engines like Google aggregate data from multiple sites. I may want to download a datasheet for an electronic component, find an answer to a technical question, find a language-learning course site, or look for museums in my area.

    Usually I make specific searches with very specific conditions, so I tend to get few and relevant results. I think search engines have their place.


  • Fair enough. I just looked it up and if the scale in this image is correct, I agree that the size of the hole looks small in comparison. I also looked at the security video of the crash itself and it is frustrating how little we can see from it.

    Since this was such an important event and there seems to be a lack of specific pieces of essential evidence - either because of bad luck or because of a cover-up - I understand the skepticism. And I am not a fan of blindly believing any official narrative. But, without any context, if I see that photo and someone tells me that a plane crashed into that building, I would find it probable simply because the shape is so similar to the photo of the Bijlmer accident that I’m familiar with. A plane crash seems to me like a very chaotic process, so I don’t have a good expectation of what the damage should look like.

    Maybe I’ll look for a pentagon crash documentary some time.





  • I ordered four of the simpler devices this weekend (LilyGO T3-S3 LoRa 868MHz - SX1262) and I have been reading about antennas.

    Since I live in a city I am not super optimistic about the range. But I am still very curious about the concept, and I would love to be surprised.

    After doing some research on antennas, I have decided to test the following combination:

    I also have a vector network analyzer (LiteVNA) that can be used for checking antennas, so I will try building some antennas myself. I doubt that my custom antennas will approach the performance of the professional ones… But I just find it such a cool concept.
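
    As a starting point for a home-made build, the classic quarter-wave monopole length is just arithmetic (the 0.95 velocity factor below is a rule of thumb for thin wire, not a measured value - the LiteVNA is what would verify the actual cut):

    ```python
    # Back-of-the-envelope length for a quarter-wave monopole at 868 MHz.
    c = 299_792_458          # speed of light, m/s
    f = 868e6                # LoRa frequency, Hz
    velocity_factor = 0.95   # rule of thumb for thin wire, not measured

    quarter_wave = velocity_factor * c / f / 4
    print(f"quarter-wave element: {quarter_wave * 1000:.1f} mm")  # ~82 mm
    ```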

    Have you already gotten to play with it? What is your experience so far?




  • Thank you - that makes sense!

    I think I understand now why this is done. Most HTTP traffic is hidden by the SSL encryption, and the keys to decrypt it are client-specific. So, if one wants to block ads at the network level without needing the SSL keys of every client that connects to the network, the domain name being requested is the most specific piece of information that you can provide the PiHole with. HTTP-level blocking needs to be set up in a client-specific manner, and that’s why those blockers work well as browser extensions.
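
    A toy sketch of the idea (the blocklist entries are made up): the resolver only ever sees domain names, so that is the level at which it can sinkhole requests:

    ```python
    # Toy sketch of DNS-level blocking, the approach a Pi-hole takes.
    # The resolver sees only domain names, never the encrypted HTTP payload,
    # so blocklisted names can be answered with an unroutable address.
    import socket

    BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # made-up entries

    def resolve(domain: str) -> str:
        if domain in BLOCKLIST:
            return "0.0.0.0"  # sinkhole: the ad request goes nowhere
        return socket.gethostbyname(domain)  # otherwise resolve normally

    print(resolve("ads.example.com"))  # -> 0.0.0.0
    print(resolve("example.com"))      # -> a real upstream answer
    ```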