Not fishy at all! It's like a lockpicking fan asking about locksport.
If you're looking for examples, GitHub has a lot of CVE proof-of-concepts and there are lots of payload git repos across git hosts in general, but if you're looking for a one-stop-shop "Steal all credentials," or "Work on all OSes/architectures just by switching the compile target," then you'll have a harder time. (A do-one-thing-well approach is more maintainable after all.)
If you want to make something yourself that still tries to pull off the take-as-much-as-you-can approach, you should just search up how different apps store data and whether it's easy to grab. Like, where browsers store their cookies, or the implications of X11's security model (Linux-specific), or where Windows/Windows apps' credentials and hashes are stored. Of course, there's only so much a payload can do without a vulnerability exploit to partner with (e.g. is privilege escalated? Are we still in userland? Is this just a run-of-the-mill Trojan?).
Apologies if my answer is too general.
obligatory navier-stokes equation
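For anyone out of the loop, that would be the incompressible form (momentum plus mass conservation):

```latex
% incompressible Navier-Stokes equations
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0
```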
How do people make and save kaomoji art?
This site is so cool!
```
 　/> フ
 　| 　_　_|
 　/` ミ_xノ
 　/　　　 |
 　/　ヽ　 ノ
 　│　| | |
／ ̄|　| | |
( ̄ヽ＿_ヽ_)__)
＼二)
```
But how do people make these? I searched online and the best I could find were small Japanese communities still using MS Gothic (which is metrically incompatible with Arial/more-used fonts) and halfhearted JPG-to-ASCII-bitmap converters.
Further, how do people manage these? I'd imagine an emoji search, but these millionfold emoticons don't have names; and the other alternatives are "I've got a meme for that scrolls down infinite camera roll" or searching them up every time.
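One low-tech way to manage them (a sketch; the file path and tag scheme here are entirely made up, use whatever fits your setup): keep a plain-text stash with one kaomoji per line, a TAB, then comma-separated tags, and grep it like a makeshift emoji search.

```shell
# build a tiny searchable kaomoji stash: kaomoji<TAB>tags
printf '(˶ᵔ ᵕ ᵔ˶)\thappy,cute,thanks\n' >  /tmp/kaomoji.tsv
printf '(╯°□°)╯︵ ┻━┻\ttableflip,angry\n' >> /tmp/kaomoji.tsv

# "emoji search" by tag: match the tags, print column 1
grep -i tableflip /tmp/kaomoji.tsv | cut -f1   # -> (╯°□°)╯︵ ┻━┻
```

From there it's one alias away from piping into `fzf` and a clipboard tool (`wl-copy`/`xclip`) for point-and-paste.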
```
 /\_/\
(˶ᵔ ᵕ ᵔ˶)
/ >🌷<\~♡
```

thanks lol
Grandiloquent/sesquipedalian. It's what you get when you use everything in this thread ₍^ >ヮ<^₎ .ᐟ.ᐟ
/s
Specifically, it refers to a deep understanding.
> [A critic] notes that [the coiner's] first intensional definition is simply "to drink", but that this is only a metaphor "much as English 'I see' often means the same as 'I understand'".

(from Wikipedia)

> When you claim to "grok" some knowledge or technique, you are asserting that you have not merely learned it in a detached instrumental way but that it has become part of you, part of your identity. For example, to say that you "know" Lisp is simply to assert that you can code in it if necessary – but to say you "grok" Lisp is to claim that you have deeply entered the world-view and spirit of the language, with the implication that it has transformed your view of programming. Contrast zen, which is a similar supernatural understanding experienced as a single brief flash.

(The Jargon File; also quoted on Wikipedia)
In 2003, Bill Burr wrote “NIST Special Publication 800-63. Appendix A” -- a security document that recommended passwords be changed every 90 days, and have irregular caps and special characters. When asked about it, and the resultant trends in people adding !@#$%^&*() to the end of their passwords, Burr said something enlightening:
Lmao
so yeah I hit the Bitwarden generate button and forget
Whoa, I didn't know about this! My trustworthy beloved orange apps were sold to ZipoApps, a company that flips apps into ad revenue.
But has anything changed for the worse yet? I don't see any odd commits in the histories (e.g. Draw). I'll probably just pin the F-Droid versions of the Simple gear I can't switch away from.
-1 accuracy point ( ◞ ﹏ ◟)
linux 4.5-rc5 had efivarfs fixed to prevent "rm -rf /" bricking uefi motherboards -- so maybe someone can try it out? :]
Speaking of fearmongering, you note that:
> > an artist getting their style copied
>
> So if I go to an art gallery for inspiration I must declare this in a contract too? This is absurd. But to be fair I’m not surprised. Intellectual property is altogether an absurd notion in the digital age, and insanity like “copyrighting styles” is just the sharpest most obvious edge of it.
>
> I think also the fearmongering about artists is overplayed by people who are not artists.
Ignoring the false equivalency between getting inspiration at an art gallery and feeding millions of artworks into a non-human AI for automated, high-speed, dubious-legality replication and derivation, copyright is how creative workers keep their careers and stay incentivized. Your Twitter experiences are anecdotal; in more generalized reality:
- Chinese illustrator jobs purportedly dropped by 70% in part due to image generators
- Lesser-known artists are being hindered from making themselves known, as visual art venues restrict submissions to already-known artists in order to filter out AI-generated work -- the opposite of democratizing art
- Artists have reported feeling compelled to use image generators themselves to avoid losing their jobs
- Artists' works, such as those by Hollie Mengert and Karen Hallion among others, have been used in training data without compensation, attribution, or consent -- said style mimicries have been described as "invasive" (someone can steal your mode of self-expression) and reputationally damaging -- even if the style mimicries are solely "surface-level"
The above four points were taken from the Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Jiang et al., 2023, sections 4.1 and 4.2).
Help me understand your viewpoint. Is copyright nonsensical? Are we hypocrites for worrying about the ways our hosts are using our produced goods? There is a lot of liability and a lot of worry here, but I'm having trouble reconciling: you seem to be implying that this liability and worry are unfounded, but evidence seems to point elsewhere.
Thanks for talking with me! ^ᴗ^
(Comment 2/2)
Thanks for the detailed reply! :P
I'd like to respond to every part of what you pointed out -- real discussions are always exciting!
> > ...they pay the journals, not the other way around...
>
> Yes of course. It’s not at all relevant?
It's arguably relevant. Researchers pay journals to display their years of work, then these journals resell those years of work to AI companies who send indirect pressure to researchers for more work. It's a form of labor where the pay direction is reversed. Yes, researchers are aware that their papers can be used for profit (like medical tech) but they didn't conceive that it would be sold en masse to ethically dubious, historically copyright-violating, pollution-heavy server farms. Now, I see that you don't agree with this, since you say:
> ...not only is it very literally transparent and most models open-weight, and most libraries open-source, but it’s making knowledge massively more accessible.
but I can't help but feel obliged to share the following evidence.
- Though a Stanford report notes that most new models are open source (Lynch, 2024), the models with the most market-share (see this Forbes list) are not open-source. Of those fifty, only Cleanlab, Cohere, Hugging Face (duh), LangChain (among other Python stuff like scikit-learn or tensorflow), Weaviate, TogetherAI and notably Mistral are open source. Among the giants, OpenAI's GPT-4 et al., Claude, and Gemini are closed-source, though Meta's LLaMa is open-source.
- Transparency is... I'll cede that it is improving! But it's also lacking. According to the Stanford 2024 Foundation Model Transparency Index, which uses 100 indicators such as data filtration transparency, copyright transparency, and pollution transparency (Bommasani et al., 2024, p. 27 fig. 8), developers were opaque, including open-source developers. The pertinent summary notes that the mean FMTI company score improved from 37 to 58 over the last year, but information about copyright data, licenses, and guardrails has remained opaque.
I see you also argue that:
> With [the decline of effort in average people's fact-finding] in mind I see no reason not to feed [AI] products of the scientific method, [which is] the most rigorous and highest solution to the problems of epistemology we’ve come up with thus far.
And... I partly agree with you on this. As another commenter said, "[AI] is not going back in the bottle", so might as well make it not totally hallucinatory. Of course, this should be done in an ethical way, one that respects the rights to the data of all involved.
But about your next point regarding data usage:
> ...if you actually read the terms and conditions when you signed up to Facebook... and if you listened to the experts then you and these artists would not feel like you were being treated unfairly, because not only did you allow it to happen, you all encouraged it. Now that it might actually be used for good, you are upset. It’s disheartening. I’m sorry, most of you signed it all away by 2006. Data is forever.
That's a mischaracterization of a lot of views. Yes, a lot of people willfully ignored surveillance capitalism, but we never encouraged it, nor did we ever change our stance from affirmative to negative because the data we intentionally or inadvertently produced began to be "used for good". One of the earliest surveillance capitalism investigators, Harvard Business School professor Shoshana Zuboff, confirms that we were just scared and uneducated about these things outside of our control.
"Every single piece of research, going all the way back to the early 2000s, shows that whenever you expose people to what’s really going on behind the scenes with surveillance capitalism, they don’t want anything to do [with] it. The only reason we keep engaging with it is because we feel like we have no choice. ...[it] is a colossal market failure. Because it is not giving people what people want. ...everything that's inside that choice [i.e. the choice of picking between convenience and privacy] has been designed to keep us in ignorance." (Kulwin, 2019)
This kind of thing -- corporate giants giving up thousands of papers to AI -- is another instance of people being scared. But it's not fearmongering. Fearmongering implies we're making up fright where it doesn't really exist; here there is an awful, fear-inducing precedent set by this action. Researchers now have to live with the idea that corporations, these vast economic superpowers, can suddenly and easily pivot into using all of their content to fuel AI and make millions. This is the same content they spent years on and intended for open use by peers in humanity-supporting ways, content they had few options for storing, publishing, or hosting other than said publishers. Yes, they signed the ToS and now they're eating it. We're evolving towards the future at breakneck pace -- what's next? they worry, what's next?
(Comment 1/2)
Hmm, that makes sense. The toothpaste can't go back into the tube, so they're going a bit deeper to get a bit higher.
That does shift my opinion a bit -- something bad is at least being made better -- although the "let's use more content-that-wants-to-be-open in our closed content" is still a source of consternation.
Obligatory Linux comment (Lemmy moment):
Windows is used often for its compatibility and defaultness but Linux is interesting in the sense that everything is patchable, everything is tinkerable and configurable. The low resistance to tinkering makes lots of Linux users tinkerers -- including tinkering via code.
I'm not saying wipe your hard drive or even dual-boot. Maybe an older computer or VM could help, depending on what you have. But just in the past week I've screwed around in low-to-medium-difficulty Linux projects that configured my lockscreen with C, that implemented mildly usable desktop GUIs with TypeScript, among others -- just not-too-committal stuff that has a return value I literally see every time I lock my computer.
Windows-equivalent projects can be harsher on the beginner-to-intermediate curve (back when I first tried out Linux Mint, I'd been struggling to make a bookmark inspector in Visual Studio -- I ended up Pythoning it instead) -- not to say that Windows fun is by any means out of reach.
My friends Leetcoded and Codeforced quite a lot. Advent of Code is up there too, with the interesting caveat that Advent of Code also teaches you refactoring (due to the two-part nature of every problem).
When I was younger, though, I had contempt for their whiteboard-problem-esque appearance -- but everyone is different.
If you look hard enough there is always a project at medium difficulty -- not way too hard, like a huge project you feel won't give you returns -- not way too easy, like some cowsay clone. Ever tried making a blog? You can host for free on most Git pages implementations (codeberg, github, gitlab...).
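For the blog route, the barest possible start is something like this sketch (the path, page contents, and commit identity are placeholders; the exact repo/branch naming for Pages hosting depends on the host):

```shell
# minimal static "blog" scaffold: one page in a fresh git repo,
# ready to add a Codeberg/GitHub/GitLab Pages remote and push
mkdir -p /tmp/myblog && cd /tmp/myblog
git init -q
cat > index.html <<'EOF'
<!doctype html>
<title>my blog</title>
<h1>post #1: hello</h1>
EOF
git add index.html
git -c user.name=me -c user.email=me@example.com commit -qm "first post"
git log --oneline   # one commit, one page: a blog is born
```

Everything after that (templates, RSS, a static site generator) is optional scope creep you can take on exactly when it stops being fun to hand-write HTML.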
As for programming books, consider trying security books like Art of Exploitation -- in the same strain, CTFs can use a decent amount of code, and they're fun in terms of raw problem-solving. I started with the Bandit wargame, which does Linux problem solving from any machine that has SSH.
I'm not by any means a l33t hax3r but I found them pretty fun in my learning journey.
Despite the downvotes I'm interested why you think this way...
The common Lemmy view is that morally, papers are meant to contribute to the sum of human knowledge as a whole, and therefore (1) shouldn't be paywalled in a way unfair to authors and reviewers -- they pay the journals, not the other way around -- and (2) closed-source artificially intelligent word guessers make money off of content that isn't their own, in ways over which said content-makers have little agency or say, without contributing back to the sum of human knowledge by being open-source or transparent (Lemmy has a distaste for the cloisters of venture capital and multibillion-parameter server farms).
So it's not about using AI or not, but about the lack of self-determination and transparency -- e.g. an artist getting their style copied because they paid an art gallery to display it, and the gallery traded image-generation rights to AI companies without the artist's say (it can be argued that the artists signed the ToS, but there weren't any viable alternatives to signing).
I'm happy to listen if you differ!
Isn't that because the peers also write stuff? So it's not just a fixed delay on one-by-one papers, but a delay that goes between peers' periods of working on papers too.
I... don't have ADHD (relatively confident) but I've used both of your hacks before and they've measurably helped me.
The templating thing slung me over its shoulder and carried me through battlefields. Procrastinate 'til the last hour? Assignment must be in LaTeX? Don't worry, everything is already formatted, just add the double-dollar-signs and equate!
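The kind of skeleton I mean, roughly (class, packages, and placeholder names are just my defaults, not anything canonical):

```latex
% assignment-template.tex -- a reusable skeleton: everything pre-formatted,
% so a last-hour assignment really is "fill in, add double-dollar-signs, equate"
\documentclass{article}
\usepackage{amsmath}

\title{Assignment \#N}
\author{Me}
\date{\today}

\begin{document}
\maketitle

\section*{Problem 1}
Work goes here, e.g.
$$ \int_0^1 x^2 \, dx = \frac{1}{3} $$

\end{document}
```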
Bored? Need to get this article done but it'll be even more boring? Watch random dubbed animations or something while hitting the keys -- low-pressure colors and music cushion the harder-thinking part. Somehow the perceived expenditure of I Need To Focus mutes itself!
(Footgun if the side-video is too interesting.)
First thing I'd ever seen on the darknet was this bad boy. (Not that it was a terribly efficient way to get an epub.)
Such a bottom-up book. Almost gave up back then, thinking I wouldn't be able to handle assembly, but then what would the point of reading about the hacker mindset be?
Lmao it's not Lemmy without Linux
noh8
Oh, you're right. You just pass the `-d` detach flag. I stand corrected!
According to tab autocomplete...
```
$ git
zsh: do you wish to see all 141 possibilities (141 lines)?
```
But what about the sub options?
```
$ git clone https://github.com/git/git
$ cd git/builtin
# looking through source, options seem to be declared by OPT
# except for if statements, OPT_END, bug checks, etc.
$ grep -R OPT_ | grep --invert-match --count -E \
  "OPT_END|BUG_ON_OPT|if |PARSE_OPT|;$|struct|#define"
1517
```
Maybe 1500 or so?
edit: Indeed, maybe this number is too low. `git show` has a huge number of possibilities on its own, though some may be duplicates or rewordings of others.
```
$ git show --
zsh: do you wish to see all 489 possibilities (163 lines)?
```
```
$ man git-show | col -b | grep -E "^ -" --count
98
```
An attempt at naively parsing the manpages gives a larger number.
```
$ man $(find /usr/share/man -name "git*") \
  | col -b | grep -E "^ -" -c
1849
```
Numbers all over the place. I dunno.
Huh, TIL.
To be fair, `git switch` was also derived from the features of `git checkout` in 2.23, but like `git restore`, the manual page warns that behavior may change, and neither is in my muscle memory (lmao).

I'll probably keep using checkout since it takes less kb in my head. Besides, we still have to use checkout for checking out a previous commit, even if I learn the more ergonomically appropriate `switch` and `restore`. No deprecation here, so...
edit: maybe I got that java 8 mindset
edit 2: Correction -- `git switch --detach` checks out previous commits. `git checkout` may only be there for old scripts' sake, since all of its features have been split off into those two new commands... so there's nothing really keeping me from `switch`.
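If it helps anyone else's muscle memory, a quick throwaway-repo check of that (the commit identity flags are just so the commits go through in a clean environment):

```shell
# throwaway repo with two commits, then detach at the older one
cd "$(mktemp -d)"
git init -q
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m "first"
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m "second"

# old habit: git checkout HEAD~1 -- new equivalent:
git switch --detach -q HEAD~1
git log -1 --format=%s   # -> first
```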
Thoughts on parental controls?
I saw a post recently about someone setting up parental controls -- screentime, blocked sites, etc. -- and it made me wonder.
In my childhood, my free time was very flexible. Within this low-pressure flexibility I was naturally curious, in all directions -- that meant both watching brainteaser videos, and watching Gmod brainrot. I had little exposure to video games other than Minecraft which ran poorly on my machine, so I tended to surf Flash games and YouTube.
Strikingly, while watching a brainteaser video, tiny me had a thought:
> I'm glad my dad doesn't make me watch educational videos like the other kids in school have to.
For some reason, I wanted to hold onto that thought, to "remember what my thought process was as a child," so that memory has stuck with me.
Onto the meat: if I had had a capped screentime, like a timer I could see, and knew that I was being watched in some way, I'd feel pressure. For example,
> 10 minutes left. Oh no. I didn't have fun yet. I didn't have fun yet!!
> Oh no, I'm gonna get in so much trouble for watching another YTP...
and maybe that pressure wouldn't have let me grow into an independent, curious kid -- into the person I am now. Maybe it would've made me fearful or suspicious instead. I was suspicious once, when one of my parents said "I can see what you browse from the other room" -- so I ran the scientific method to verify whether they could. (I wrote "HI MOM" in Paint, and tested if her expression changed.)
So what about now? Were we too free, and is it now our job to rein in the next generation? I said "butthead" often. I loved asdfmovie, but my parents probably wouldn't have. I watched SpingeBill YTPs (at least it's not corporatized YouTube Kids).
Or differently: do we watch our kids without them knowing? Write a keylogger? Or just take router logs? Do we police them panopticon-style, for their own good?
Or do we completely forgo this? Take an Adventure Playground approach?
Of course, I don't expect a one-size-fits-all answer. Where do you stand, and why?
Julia Evans' Git cheat sheet
Git cheat sheets are a dime-a-dozen but I think this one is awfully concise for its scope.
- Visually covers branching (WITH the commands -- rebasing the current branch can be confusing for the unfamiliar)
- Covers reflog
- Literally almost identical to how I use git (most sheets are either Too Much or Too Little)
What was your last RTFM adventure?
What was your last RTFM adventure? Tinker this, read that, make something smoother! Or explodier.
As for me, I wanted to see how many videos I could run at once. (Answer: 60 frames per second or 60 frames per second?)
With my sights on GPUizing some ethically sourced motion pictures, I RTFW, graphed, and slapped on environment variables and flags like Lego bricks. I got the Intel VAAPI thingamabob to jaunt by (and found that it butterized my mpv videos).
```bash
$ pacman -S blahblahblahblahblahtfm
$ mpv --show-profile=fast
Profile fast:
 scale=bilinear
 dscale=bilinear
 dither=no
 correct-downscaling=no
 linear-downscaling=no
 sigmoid-upscaling=no
 hdr-compute-peak=no
 allow-delayed-peak-detect=yes
$ mpv --hwdec=auto --profile=fast graphwar-god-4KEDIT.mp4

fucking silk
```
But there was no pleasure without pain: Mr. Maxwell F. N. 940MX (the N stands for Nvidia) played hooky. So I employed the longest envvars ever

```bash
$ NVD_LOG=1 VDPAU_TRACE=2 VDPAU_NVIDIA_DEBUG=3 NVD_BACKEND=direct NVD_GPU=nvidia \
  LIBVA_DRIVER_NAME=nvidia VDPAU_DRIVER=nvidia prime-run vdpauinfo
GPU at BusId 0x1 doesn't have a supported video decoder
Error creating VDPAU device: 1

stfu
```

to try translating Nvidia VDPAU to VAAPI -- of course, here I realized I rtfmed backwards and should've tried to use just VDPAU instead. So I did.
Juice was still not acquired.
Finally, after a voracious DuckDuckGoing (quacking?), I was then blessed with the freeing knowledge that even though post-Kepler is supposed to support H264, Nvidia is full of lies...
```plaintext
 ______
< fudj >
 ------
   \  ‘^----^‘
    \ (◕(‘人‘)◕)
      (  8  ) ô
      (  8  )_______(  )
      (  8  8           )
      (_________________)
        ||         ||
       (||        (||
```
and then right before posting this, gut feeling: I can't read.

```
$ lspci | grep -i nvidia
... NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)
```

ArchWiki says that GM108 isn't supported.

Facepalm
SO. What was your last RTFM adventure?
How do I add autocompletion for my `stfu` command?
I have a little helper command in `~/.zshrc` called `stfu`.
```
stfu() {
	if [ -z "$1" ]; then
		echo "Usage: stfu <program> [arguments...]"
		return 1
	fi
	nohup "$@" &>/dev/null &
	disown
}
complete -W "$(ls /usr/bin)" stfu
```

`stfu` will run some other command but also detach it from the terminal and make any output shut up. I use it for things such as starting a browser from the terminal without worrying about `CTRL+Z`, `bg`, and `disown`.

```
$ stfu firefox -safe-mode
# Will not output stuff to the terminal, and
# I can close the terminal too.
```

Here’s my issue:
On the second argument and above, when I hit tab, how do I let autocomplete suggest me the arguments and command line switches for the command I’m passing in?
e.g. `stfu ls -<tab>` should show me whatever ls’s completion function is, rather than listing every `/usr/bin` command again.
Intended completion:

```
$ stfu cat -<TAB>
-e                       -- equivalent to -vE
--help                   -- display help and exit
--number           -n    -- number all output lines
--number-nonblank  -b    -- number nonempty output lines, overrides -n
--show-all         -A    -- equivalent to -vET
--show-ends        -E    -- display $ at end of each line
--show-nonprinting -v    -- use ^ and M- notation, except for LFD and TAB
--show-tabs        -T    -- display TAB characters as ^I
--squeeze-blank    -s    -- suppress repeated empty output lines
-t                       -- equivalent to -vT
-u                       -- ignored
```

Actual completion:

```
$ stfu cat <tab>
...a list of all /usr/bin commands
$ stfu cat -<tab>
...nothing, since no /usr/bin commands start with -
```
(repost, prev was removed)
EDIT: Solved.
I needed to set the `curcontext` to the second word. Below is my (iffily annotated) zsh implementation, enjoy >:)
```
stfu() {
	if [ -z "$1" ]; then
		echo "Usage: stfu <program> [arguments...]"
		return 1
	fi
	nohup "$@" &>/dev/null &
	disown
}
#complete -W "$(ls /usr/bin)" stfu

_stfu() {
	# Curcontext looks like this:
	#   $ stfu <tab>
	#   :complete:stfu:
	local curcontext="$curcontext"
	#typeset -A opt_args # idk what this does, i removed it

	_arguments \
		'1: :_command_names -e' \
		'*::args:->args'

	case $state in
		args)
			# idk where CURRENT came from
			if (( CURRENT > 1 )); then
				# $words is magic that splits up the "words" in a shell command.
				# 1. stfu
				# 2. yourSubCommand
				# 3. argument 1 to that subcommand
				local cmd=${words[2]}
				# We update the autocompletion curcontext to
				# pay attention to your subcommand instead
				curcontext="$cmd"
				# Call completion function
				_normal
			fi
			;;
	esac
}
compdef _stfu stfu
```
Deduced via docs (look for The Dispatcher), this dude's docs, stackoverflow and overreliance on ChatGPT.
EDIT: Best solution (Andy)
```
stfu() {
	if [ -z "$1" ]; then
		echo "Usage: stfu <program> [arguments...]"
		return 1
	fi
	nohup "$@" &>/dev/null &
	disown
}

_stfu() {
	# shift autocomplete to the right
	shift words
	(( CURRENT -= 1 ))
	_normal
}
compdef _stfu stfu
```