I'm the administrator of kbin.life, a general purpose/tech orientated kbin instance.
Going to second other comments: even without archinstall, it feels like it will be harder than it actually is. Just save yourself a bit of time and configure the network and install a console editor (nano/vim, whatever) while in the chroot (if going full manual). It was a minor pain to work around that for me.
There are pages discussing how to do everything (it helps to have a laptop with a browser, or a phone, to look them up). At the end you generally know exactly what you installed (OK, no one tracks every dependency), and I've found any borks that happen easy to fix because I know what I installed.
I remember those times too. The difference today is that there are so many more libraries and projects use those libraries a lot more often.
So using configure and make means the user also takes on responsibility for keeping all those libraries up to date. And again, if we're talking about avoiding binary installs, each of those needs its own regular configure/make cycle too. It's not that unusual for large packages to depend on 100+ libraries, at which point building and maintaining all of them yourself becomes untenable. However, I think Gentoo exists to automate a lot of this while still building from source.
I understand why binaries with references to other binary packages as prerequisites are used. I also understand where the limits of that are and why AppImage/Flatpak/snap exist. I just don't particularly like the latter as a concept, but I accept there are times you might need them.
The current thing I'm working on (processor for iptv m3u files) isn't public yet, it's still in the very early stages. Some of the "learning to fly" rust projects I've done so far are here though:
https://git.nerfed.net/r00ty/bingo_rust (it's a multi-threaded bingo game simulator that I made because of the Stand-up Maths video on the subject).
https://git.nerfed.net/r00ty/spectrum_screen (this is a port of part of a general CPU emulation project I did in C#; it emulates the ZX Spectrum screen, and you can load in the 6912-byte screen dumps and it will show them in a 2x-scaled window).
I think both of these lean on Arc<RwLock<Thing>> instead, because they both operate in a threaded environment. Bingo is wholly multi-threaded, and the spectrum screen is meant to be used by a CPU emulator running in another thread. So not quite the same thing. But you can probably see a lot of jamming the wrong shape into the wrong hole in both of those.
The current project isn't multi-threaded. So it has a lot of the Rc/Rc<RefCell> action instead.
EDIT: Just to give the reason for Rc<RefCell> in the current project: I'm reading in an M3U file and cross-referencing it against an Excel file. So in the structure for the M3U file I have two BTreeMaps, one keyed by channel number and one by name, each containing references to the same Channel objects.
Likewise, the same Channel objects are stored in the structure for the Excel file once it's read in (after being looked up in the M3U structure).
BTreeMaps are used because, depending on the scenario, the contents will be output in either name order or channel-number order, so it's better to store them in sorted order in the first place.
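A minimal sketch of the shape of it (names and fields are illustrative, not the real project's):

```rust
use std::cell::RefCell;
use std::collections::BTreeMap;
use std::rc::Rc;

// Hypothetical channel record; the real project's fields will differ.
struct Channel {
    number: u32,
    name: String,
}

#[derive(Default)]
struct M3uChannels {
    by_number: BTreeMap<u32, Rc<RefCell<Channel>>>,
    by_name: BTreeMap<String, Rc<RefCell<Channel>>>,
}

impl M3uChannels {
    // Both maps hold handles to the same Channel, so a change made
    // through one map is visible through the other.
    fn insert(&mut self, channel: Channel) {
        let shared = Rc::new(RefCell::new(channel));
        let number = shared.borrow().number;
        let name = shared.borrow().name.clone();
        self.by_number.insert(number, Rc::clone(&shared));
        self.by_name.insert(name, shared);
    }
}

fn main() {
    let mut channels = M3uChannels::default();
    channels.insert(Channel { number: 101, name: "BBC One".into() });
    // Iterating either map comes out pre-sorted.
    for (name, ch) in &channels.by_name {
        println!("{name} -> channel {}", ch.borrow().number);
    }
}
```

Iterating by_number or by_name then comes out in the right order for free, and the Excel-side structure can hold Rc::clone()s of the same entries.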
"I think, the people of this country have had of experts" - Michael Gove, (during UK brexit campaign)
The problem with Rust, I always find, is that when you're from the previous coding generation like me, having grown up on 8-bit machines with BASIC and assembly language you could actually use, then moving into OO languages, I'm always trying to shove a round block into a square hole.
When I look at other projects written in Rust from the start, I think they're using a different design paradigm.
Not to say what I make doesn't work or isn't still fast and mostly efficient (mostly...). But one example: because I'm used to working with references and shoving them into different storage, everything ends up wrapped in Rc<xxx> or Rc<RefCell<xxx>> and accessed with blah.borrow().x etc.
Nothing wrong with that, but the code (to me at least) feels messy compared to, say, C#, which is where I do most of my day-job work these days. But since I often see things done very differently in the Rust projects I find online, I feel like I need a design-paradigm shift somewhere to really get on with the language.
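To illustrate the kind of noise I mean, here's a contrived sketch (not code from my projects):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Player {
    score: u32,
}

fn main() {
    // A shared, mutable handle: the Rust stand-in for a C# object reference.
    let player = Rc::new(RefCell::new(Player { score: 0 }));
    let same_player = Rc::clone(&player);

    // In C# this would just be `samePlayer.Score += 10;`.
    same_player.borrow_mut().score += 10;
    println!("score = {}", player.borrow().score);
}
```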
I do still persist with Rust because I think it's way more portable than other languages. By that I mean it will build executables for Linux and Windows from the same code, needing only the standard libraries installed on the machine. So when I think of writing a project I want to work across platforms, I'm generally looking at Rust first these days.
I just realised this is programmerhumor. Sorry, not a very funny comment. Unless you're a Rust developer laughing at my plight of trying to make Rust work for me.
I looked at that. Actually I would argue that was even more negligence by the management there. I mean they couldn't even say how long he'd not been working for.
But in reality he was paid for at least 6 years of work (and they suspected more) and only fined for 1 year of pay. So, he's still a winner I think. And yes, public funds likely did help in bringing that case forward.
Most larger private businesses tend to avoid going to a court for such things unless they need to in my experience.
Would have been funnier if you just replied "Groundhog Day (1993)" again.
You can make fun of managers not doing work. You know what's worse than someone at manager/director level that doesn't do any work? One that insists on doing so! Trust me, first hand experience.
I don't know if they have much of a case to sue you, if you fall through the cracks on their own negligence. Fire you, yes. Sue, I am doubtful most larger businesses would even try. They'd rather solve the problem and sweep it under the carpet in my experience. Not USA experience of course, but still the attitude would be similar I expect.
I would worry a bit about whether they're allowed to give negative references though. Because if so, it might not be so easy to get another job after.
Best move would be to line up another job to start like a month before the review, and never reach the review stage. Even if discovered, most people that would "know" wouldn't really be driven to report anything if they're leaving anyway. The "not my problem, and this will make it my problem" attitude in big companies is real.
Yes, but it seems the French language pack is a dependency for pretty much everything else! Who knew?
This does tally up with what I've been hearing. Where I'm at there's been a few hires straight into senior. I've not heard of an official junior freeze. At the same time it's been a long time since I've seen a new one.
The problem, as I commented before, is that if we no longer bring in junior devs to gain this kind of experience, we lose the pipeline of junior -> senior. But in most places, the people making the decisions won't consider anything beyond the end of the current financial year.
I don't think developers are doing it. It's managers making this kind of decision I'd say.
I've been told about companies in the same field as mine with a hiring freeze on juniors. So it's kinda second hand.
I think it goes further than that. There's two things happening with regard to AI and software development.
1: Stack Overflow has become less used as a resource for solving problems. As you say, that cuts off the input LLMs need for the problems they'll be asked to solve in future.
2: Junior developers are being hired less because of AI. I assume the idea is that seniors will use AI the way they would usually use juniors. Except they've done what business always does: not think one bit about the future. Today's senior developers are yesterday's junior developers.
The combination of the AI performance drop from point 1 and the lack of new developers from point 2 makes for a potentially bad future for the profession.
Specifically answering this question: it works transparently alongside IPv4. Organisations running servers can offer both IPv4 and IPv6 with very little effort on their part. ISPs can deploy it, and router makers can include support, with only a reasonable amount of effort.
As users AND servers get IPv6 addresses, they will just be used in the background. At some point there would be so much IPv6 adoption that IPv4 could be turned off. There is a thing called "6to4", but dual stack has (I think rightly) become the main way people run both.
In the UK I think at least half the ISPs provide IPv6 now. I think also in Europe it's the same or better. But still we're far from replacing IPv4 and I wonder when it might ever happen.
I'm going to just answer each point in turn. Maybe it's useful. I don't know.
It offers a shitload of IP addresses
It does. Generally most ISPs assign each user at least a /64, which is 2^64 addresses: the equivalent of the entire IPv4 address space (2^32) multiplied by itself. There's a lot of address space to go around.
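Just to make that arithmetic concrete, a throwaway sketch:

```rust
fn main() {
    let ipv4_space: u128 = 1 << 32; // 4,294,967,296 addresses in all of IPv4
    let one_64 = ipv4_space * ipv4_space; // a single /64 allocation = 2^64 addresses
    println!("one /64 holds {one_64} addresses");
    println!("that's {} whole IPv4 internets", one_64 / ipv4_space);
}
```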
They look really complicated
This is true. But you rarely need to remember a full IP address; most resources are accessed via DNS. If you have servers on your own network you'll probably need to remember your own prefix (the first three or four groups of four hex digits), and the servers you want to reach would likely be ::1, ::2, etc. within that allocation, so you'd learn them. Also, most routers allow local DNS entries, and there are other things that help here.
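For example (using the 2001:db8::/32 documentation prefix; your real prefix comes from your ISP):

```rust
use std::net::Ipv6Addr;

fn main() {
    // Remember the prefix once; individual hosts are just short suffixes.
    let nas: Ipv6Addr = "2001:db8:1234:5678::1".parse().unwrap();
    let web: Ipv6Addr = "2001:db8:1234:5678::2".parse().unwrap();
    println!("{nas} and {web} share the 2001:db8:1234:5678::/64 prefix");
}
```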
Something about every device in your local network being visible from everywhere?
This is a concern, but mostly because some router makers ship badly configured defaults these days. The correct way to configure a router is to allow outgoing/established connections by default and block all incoming connections (until you specifically open a port). Once this is done, the security is very similar to NAT.
Some claim it obsoletes NAT?
Yes, NAT was created to make a small address space work in an era of multiple internet consumers behind a single connection. But when each device can get a routable IPv6 address, NAT is not needed. However the security I talk about above IS essential to apply to consumer routers.
Now, I'll elaborate on some of the features of IPv6 (a lot of which are just not being used when they could have been).
IPv6 privacy extensions (RFC 4941)
This allows normal client machines (the kind that would usually sit entirely behind NAT) to have a level of security and privacy similar to what NAT provides. One concern with plain IPv6 and a fixed allocation is that a specific machine could be identified from web logs etc., which is bad in privacy terms. This extension ensures you have multiple active IPv6 addresses: one stable address, perhaps with some ports open, which is never used for outgoing connections; and randomly generated temporary addresses used for outgoing connections, which have no ports open and change frequently. I think this is enabled by default on Windows (look at ipconfig and you will often see multiple "temporary addresses").
Harder to portscan
Currently it doesn't take THAT long to portscan the whole IPv4 address space. And because almost every public address has multiple hosts behind it, there's a good chance ports will be open on a lot of the IPs scanned.
With IPv6 the public address space is huge. Normal machines get their addresses assigned randomly within a huge per-user allocation, and every IP would still need every port scanned, which makes active port scanning much harder. The privacy extensions above also make passive port scanning (scanning IPs harvested from web logs, for example) harder.
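Rough numbers to show the scale, assuming a scanner that can probe ten million addresses a second (in the ballpark of what tools like masscan claim for IPv4):

```rust
fn main() {
    let probes_per_sec: u128 = 10_000_000;
    let ipv4 = 1u128 << 32; // the whole IPv4 internet
    let one_64 = 1u128 << 64; // a single /64, the usual size of one LAN

    println!("all of IPv4: {} seconds", ipv4 / probes_per_sec); // ~7 minutes
    let years = one_64 / probes_per_sec / (365 * 24 * 3600);
    println!("one /64: ~{years} years"); // roughly 58,000 years
}
```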
User experience
Provided consumer routers are configured well from the factory and ISPs make sensible decisions about allocating address space, the user gets the advantages without noticing. When you go to google/facebook/youtube etc., you will often be on IPv6 and not even know it.
We used to have it terrible in the UK in the 90s and 2000s. Basic ADSL was trialled in 1999 and became available in maybe late 2000, I think. But it stagnated for a while.
When it comes to fibre, interesting things are happening. As well as the "national" (though privatised) telco installing it, there are many independent companies fitting it. Where I live I have the option of the official telco (1000/110) or a private company (1000/1000). Of course I chose the latter :P
Some people have 3 or more options.
Yeah in the future there might well be a handful of overall winners that vacuum up the losers and carve up the territory. But right now, it's a good time for the normal people... At least for internet.
EDIT: Just to add, some are ISPs and will only sell their own product. Some are wholesale, so even if they're the only company in your area, you can often buy from multiple ISPs through them.
This one threw me off. I'd muted Discord by mistake, but weirdly voice still worked. I spent ages checking and double-checking settings to see why I wasn't getting notification sounds or the PTT sound, dismissing any mute possibility because voice was working.
When I found it was this....
I'm on a pretty old version of mbin (I have some modifications I made for federation issues back when it was kbin). I need to spend a weekend to pilot an upgrade and make sure I can run it safely live.
But even then it's already better in some ways, and I never feel like I'm missing something from Lemmy. I think just calling the whole thing Lemmy puts off people who see things through a political lens.
Pretty sure that's only true of Lemmy. There are other threadiverse apps; the mistake is people calling the whole threadiverse "Lemmy".
He spoke at the SCO summit, which took place virtually under Indian PM Narendra Modi's leadership.