Ask a programmer to review 10 lines of code, he'll find 10 issues. Ask him to do 500 lines and he'll say it looks good.
Ah, that's too boring. I have a range of responses to pick from to keep things interesting:
LGTM
Nice
Looks good
Thanks
Looks great
:thumbsup:
Looks good to me
:shipit:
For me, no text means "I haven't really reviewed this properly so don't want to write anything that could be used against me if (when?) this breaks something in prod"
If you're in a place with codebase analytics you want to have at least one comment on every MR - otherwise the system will start to think you're falling behind...
I hate codebase analytics.
In my experience, the managers get confused when issues/PRs are closed without any comment.
Useless comments beat having them pop into your Slack to ask "hey, did you review this?" with a link to an approved PR.
That's why PRs should be small. It's much better to have multiple PRs than a single big one.
Totally fair to have a gigantic PR full of boilerplate code, but generally you can split the boilerplate and your feature into two PRs, where only the feature gets a proper review.
All of this obviously depends on the criticality of the system :p
That can lead to another problem though, which is that if a developer knows a merge is only part of the whole change, it becomes easy to assume any issues will be handled elsewhere.
Yeah, fair point. It really only works with standard boilerplate code that's simple enough to not have any issues, I guess. In my case, working in an NX monorepo, that would be any code created using the generators.
True, but the 10-line change can be merged two weeks from now and it won't make any difference. If you let the 500-line merge languish for two days, you'll have screwed up the work of three other programmers.
Really grinds my gears to get a review request for a huge PR with almost no context and no instructions on how to run the code or what test data to use.
I've even had one guy ask for a review who hadn't even manually tested his own code's happy path. Sure, he wrote some unit tests and those ran, but once you actually tried using the code in the app, all kinds of exceptions and weird situations came up. No idea how people dare do that shit.
Perhaps we should introduce the PR Wall of Shame for exactly those situations. I mean, obviously you don't want to strangle productivity by naming and shaming people for every single small mistake, but egregious violations like not even click-testing the happy path should be used as an example of what fucking up looks like.
Playing the blame game, especially publicly, is bad for psychological safety in a community. See Linus Torvalds for examples (he has recently become a bit softer with his feedback).
A better option is to set clear expectations of what a good PR looks like. Then, if expectations are not met, you can give 1-on-1 feedback. Don't just blast a noob in public or you can leave emotional scars.
Poor management and time pressure are the answers. It often looks better to push some shit code and then push the fixes later than it does to take twice as long to verify a good change.
I know this implies that the reviewer didn’t care to read the bigger PR, but I think this might actually be legit. If your PR is only 10 lines long, then chances are those lines are very dense, or intricate in some other way. However, if you submit 500 lines, then it’s probably mostly boilerplate code with trivial adaptations.
No-one in their right mind would submit 500 lines of substantial code.
You'd be surprised. Especially if the testing environment is not readily available or if automated tests are not functional and comprehensive, large code changes can be the norm. A developer may habitually hang onto their code until a big chunk is complete, at which point it will take heaps of debugging to uncover where the errors are. This is why we need IaC to quickly create testing environments that closely mirror prod, and trunk-based development to ensure code changes are small and issues are caught as early as possible.
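To make the trunk-based part concrete, here's a minimal TypeScript sketch of shipping incomplete work dark behind a feature flag (the flag name and the lookup mechanism are made up; real setups usually use a flag service and per-environment config):

```typescript
// Feature flags let small, incomplete changes merge to trunk continuously:
// the new code path is merged and reviewed early but stays dark until the
// flag is flipped for a given environment.
const FLAGS: Record<string, boolean> = {
  "new-checkout-flow": false, // hypothetical flag, off in prod for now
};

export function isEnabled(flag: string): boolean {
  return FLAGS[flag] ?? false;
}

export function checkout(cartItems: string[]): string {
  if (isEnabled("new-checkout-flow")) {
    // Each small PR extends this branch; reviewers see 50 lines, not 500.
    return newCheckout(cartItems);
  }
  return legacyCheckout(cartItems);
}

function newCheckout(cartItems: string[]): string {
  return `new checkout: ${cartItems.length} item(s)`;
}

function legacyCheckout(cartItems: string[]): string {
  return `legacy checkout: ${cartItems.length} item(s)`;
}
```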
Has no one here ever worked on a new project or even a new feature in a decently sized codebase? Working exclusively in maintenance / minor change mode has to be exhausting.
Depends on what you classify as “minor change”. When I took up my first professional project, I found a plethora of little things to improve which would make users happy. That was very satisfying.
On the other hand, writing yet another module that displays a list of Foos and lets the user create a Foo, view its details, update it, and delete it becomes dull quickly, despite being a “new feature”.
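For concreteness, here's roughly the kind of module I mean, as a minimal Angular-style TypeScript sketch (Foo, FooService, and the /api/foos endpoint are all made up for illustration):

```typescript
// Yet another CRUD service: list, create, read, update, delete a Foo.
// Every entity in the app gets a near-identical copy of this.
import { HttpClient } from "@angular/common/http";
import { Injectable } from "@angular/core";
import { Observable } from "rxjs";

export interface Foo {
  id: number;
  name: string;
}

@Injectable({ providedIn: "root" })
export class FooService {
  private readonly baseUrl = "/api/foos"; // hypothetical endpoint

  constructor(private http: HttpClient) {}

  list(): Observable<Foo[]> {
    return this.http.get<Foo[]>(this.baseUrl);
  }

  get(id: number): Observable<Foo> {
    return this.http.get<Foo>(`${this.baseUrl}/${id}`);
  }

  create(foo: Omit<Foo, "id">): Observable<Foo> {
    return this.http.post<Foo>(this.baseUrl, foo);
  }

  update(foo: Foo): Observable<Foo> {
    return this.http.put<Foo>(`${this.baseUrl}/${foo.id}`, foo);
  }

  delete(id: number): Observable<void> {
    return this.http.delete<void>(`${this.baseUrl}/${id}`);
  }
}
```

Multiply that by the component, the template, and the tests, and the Foo module is hundreds of lines with barely a real decision in it.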
I have worked on non-trivial (aka took 10-12 people over a year to even deliver an alpha) greenfield projects, where I literally made the first check-in into the repo.
The only 500+ line PRs I raised were auto-generated boilerplate code or renames.
I don't understand the optimism of devs who spend weeks writing code without bothering to test anything they've written. Unless you're writing utterly trivial BS, how does one have this level of confidence in their code? And if you did bother to stop and test, why on god's green earth would you not raise a PR? Why wait till you have thousands of lines of code before asking for feedback?
Depends on what you’re doing. 500 lines in an Angular (a web frontend framework) application gets you a read-only view of a list of entities, maybe searching and filtering.
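As a rough illustration (the component and entity names are invented, and this toy version uses in-memory data), even a bare-bones filtered list view eats lines fast once you add the template, state, and wiring:

```typescript
// A read-only, filterable list view: template, state, and filtering logic
// add up quickly before any substantial feature work exists.
import { CommonModule } from "@angular/common";
import { Component, OnInit } from "@angular/core";
import { FormsModule } from "@angular/forms";

interface Entity {
  id: number;
  name: string;
}

@Component({
  selector: "app-entity-list",
  standalone: true,
  imports: [CommonModule, FormsModule],
  template: `
    <input [(ngModel)]="filter" placeholder="Filter by name" />
    <ul>
      <li *ngFor="let e of filtered()">{{ e.name }}</li>
    </ul>
  `,
})
export class EntityListComponent implements OnInit {
  entities: Entity[] = [];
  filter = "";

  ngOnInit(): void {
    // Stand-in data; a real app would fetch this over HTTP.
    this.entities = [
      { id: 1, name: "Alpha" },
      { id: 2, name: "Beta" },
    ];
  }

  filtered(): Entity[] {
    const needle = this.filter.toLowerCase();
    return this.entities.filter((e) => e.name.toLowerCase().includes(needle));
  }
}
```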
This can be called the bike-shed effect, or the Law of Triviality. It's not just in programming: a simple, digestible idea gets contested because it's easy to poke holes in, while something more complicated and more consequential is much harder to critique, so it gets little to no resistance.
Y'all need to get yourselves some PR review automation in place. Stop wasting time on trivial reviews and requesting changes for common problems so that when you ping a colleague for a code review, they know it's important rather than a simple request for a thumbs up.
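For example, if you're on GitHub, a small Danger JS (danger.systems/js) rule set can flag the common problems automatically in CI; the thresholds and messages below are just illustrative:

```typescript
// dangerfile.ts — runs in CI on every PR and comments automatically.
import { danger, fail, warn } from "danger";

const pr = danger.github.pr;
const linesChanged = pr.additions + pr.deletions;

// Nudge authors to split oversized PRs before a human reviews them.
if (linesChanged > 500) {
  warn(`This PR touches ${linesChanged} lines. Consider splitting it.`);
}

// Require enough context that the reviewer can actually run the thing.
if (!pr.body || pr.body.length < 50) {
  fail("Please describe what changed, why, and how to run/test it.");
}
```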