I'm in that group I think. I do like a liiitle bit of coding in some tiny specific programming language in one piece of software that I use. I understand the basics but try to avoid having to do it. But while code is a little scary to me, math is much scarier lol
I believe this group could be bigger than some may think. The team I work with and I use for loops similar to these on a regular basis, and only one of us has a bachelor's degree in math. The rest of us don't really understand the math unless it is applied.
Those of us born in the 70s... Doing anything with a computer required knowing at least a little programming, so we learned at 8 years old, then when we got to high school/college, we were taught by people who knew nothing about programming because they were already old and didn't think they needed to learn anything new...
Not really sure if this answers your question (I agree with you, ultimately), but here’s my experience:
At the college I attended, these sigma/pi expressions weren’t taught until the end of Calculus 2, but I wanted to take an Algorithms class - which had calc 2 as a prerequisite.
I got an exception from my advisor which allowed me to take Algorithms before the pre-req. In my experience, these concepts were easily learned in the context of algorithmic complexity.
Some might be barred from learning important theory in computer science by “brutal” math classes at university. They might find solace in this post which translates sigma into ‘for’
They are the same difficulty level, sure, but that's like saying f(x) and f'(x) are at the same difficulty level. Coming from one to the other in a process is the difficult part, and the code offers instructions to follow this process.
I mean they are both the exact same thing, I don't see why summation is scary when the for loop isn't. It's the same thing written in a short and easy format.
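Like, spelled out in Python (my own toy example, not the one from the image):

```python
# sum from i = 1 to 5 of 3*i, written as a plain for loop
total = 0
for i in range(1, 6):  # range stops before 6, so this covers i = 1..5
    total += 3 * i
print(total)  # 45, same as 3 + 6 + 9 + 12 + 15
```

Same loop, same running total, just different clothes.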
I'm a subscriber to her YouTube (one of my favourite videos of hers) and she has a bunch of videos aimed at helping game developers learn the maths concepts they need for making games, so her audience is mostly people with a coding background, I'm guessing.
So it's less that code is simpler than math notation, more that the maths notation looks scary to people without a maths background, but here's a link to a different complex symbolic abstraction that you might already know
Math notation is just terrible in general because a lot of it is shorthand made up by someone who likes single-letter variables. A symbol you can't type, something above, something below.
A for loop is clear and descriptive.
Or if you're feeling fancy, you could go functional with reduce(add, range(0, 5), 0).
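(That's meant as Python-ish, by the way; to actually run it you'd need the imports, since `reduce` lives in `functools` these days:)

```python
from functools import reduce
from operator import add

# the plain for-loop version
total = 0
for i in range(0, 5):
    total += i

# the "fancy" functional version, plus the built-in for good measure
assert total == reduce(add, range(0, 5), 0) == sum(range(0, 5))  # all equal 10
```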
Mathematical notation was designed to be written by hand. It is at least as clear and descriptive as any syntax from a programming language. You're pretending that the abstraction behind a for loop is somehow less than that behind a sum or product notation.
i hate that we all got so frightened about math. it's genuinely fun to learn how it works when you're not being forced to in a school setting, which was just a fucking nightmare for no reason. i had this former navy DI lady teacher in gifted kid algebra [so already a year ahead] yell at me for asking questions; she wasn't going to 'hold my hand' thru the homework, which was quite literally her fucking job
It's surprisingly easy. I used to give maths tutoring to finance my university degree. What I'd do is let the kids do one exercise task from their school books to see where their difficulties were. While they were on it, I quickly read through the relevant sections in the book, and it was so easy every time that I knew everything I needed to know after a few minutes. Like literally stuff that took weeks at school, within minutes.
School just sucks and makes it really hard to learn anything. Almost everything kids learn at school is actually really easy.
Sorry you were put through that. Aggression has no place in learning
My family and school were god awful at teaching. It was all forced (rote memorisation) learning and not me actually learning. I needed things taught slowly and broken down. I have wanted to learn the more advanced technical maths for a long time, but now I am an adult and need to find a safe, quiet and gentle environment where I can
anybody reading this, please do not give suggestions or advice in replies. thank you.
i completely agree. this sentiment was echoed pretty well in a (nontechnical and accessible) paper i read a few years ago. he says the current approach is like forcing people to learn music, but only teaching them how to read sheet music and not letting them touch any instruments. it hides the creativity and problem-solving of the discipline and reduces it to memorizing formulas.
Idk man, I've been doing my Calc 3 and 4 this semester and fuck me it's hard. Yeah sure it's cool sometimes, but wrapping my head around it and often trying to think about things geometrically hurts. I sat there for a full hour trying to figure out why I couldn't picture the equation I was trying to take a triple integral of, only to realize it's 4-dimensional, and I almost cried
I'm sorry you had awful teachers, but not all of them are bad. I had amazing teachers who genuinely cared whether the students learned. In contrast, I had very shitty classmates who just didn't care and would blame the teachers for their own laziness.
When you study CompSci (depending on where IG) you tend to see them that way when trying to mathematically prove something about an algorithm. It's only really a good way of thinking if you're into coding, but I don't think a teacher for a non-coding related algebra class should show this, it can be really confusing for some people.
Hi, you can look into "discrete mathematics" if you're interested in the overall subject of math for programmers. It was one of my hardest classes but highly interesting!
People who are arguing that one way of expressing these concepts is easier to learn/understand than the other are missing the whole point. Mathematical notation was not designed to teach students how to do math or explain how to design algorithms. It was invented to communicate precise, abstract ideas concisely between mathematicians who already understand what the symbols mean.
Mathematicians require a notation that has the flexibility to manipulate mathematical objects/symbols in a way that naturally emphasizes their properties and relationships. Often they don't even care whether the objects they're studying are even computable or have a numerical representation. They just need them to have certain properties so that they can be manipulated appropriately.
Discrete sums are a rare example of when the mathematical notation overlaps with the description of an algorithm for computing its value (and the overlap is not even complete; infinite sums are easily represented in math notation but are practically uncomputable when implemented naively). Every other advanced mathematical concept puts a premium on ease of symbol manipulation over computability: integrals, derivatives, matrix multiplication, abstract algebra, etc.
TL;DR math notation is complex because its intended audience is people who already understand it, want maximum flexibility of symbol manipulation, and historically didn't really care about practical computation.
You are right that the symbols weren't created so students can learn them, but students have to learn them at some point, and for me personally, a student who knows how to program, figuring out that these symbols kind of represent for loops made them easier to understand.
These scary large math symbols aren't scary at all and easily explained. The scary parts of maths lie elsewhere. They are discrete, nonlinear or high dimensional and sometimes even the numbers are complex... Or worse.
Yea, that's not explained better than a math teacher would. They just swapped notation common in math for notation common in one specific programming language. It's only easier for the audience who happens to be familiar with programming in general, and that language in particular.
I think you'd be hard pressed to find someone with any sort of programming background, even just as a hobbyist, who doesn't understand that for loop notation, whether or not they know the specific language it's from. (I couldn't even tell you what specific language that's from, because that notation matches so many different ones.)
I have a 15 year old son; he definitely has not seen summation in math classes yet, but he has far more than enough programming experience (even just from school) to understand the for loop.
not to resurrect a dead thread but yeah anyone remotely into computer science should be able to read pseudo code notation. it being variable in how people write it is part of what makes it a nice tool. this code could work in many contexts in many languages. it’s pretty precise for pseudo code, even.
Instead of jumping from 1 to 2 to 3, we move smoothly across all (typically real) numbers.
Obviously this would go to infinity almost every time, because there are infinitely many real numbers between any two distinct real numbers. So instead, we break it into a bunch of skinny rectangles with their bottoms on the x-axis and their tops at the value of the function at the start of each rectangle. As we shrink the width of the rectangles, it approaches the continuous notion.
Continuous means “smooth” - there are no jumps
Discrete means there are jumps
Short answer: Imagine that the integer used in the for loop is a float instead.
Longer, a bit more precise answer: An integer can only have discrete values (i.e. -1, 0, 1, 2, ..., 69, ... etc.)
A real number (~float with infinite precision) can have an infinite amount of values between two discrete values.
An integral is, to put it simply, a sum of all the results of taking those infinite values between two discrete values (an interval) and feeding them to the given function.
It's a for loop over an infinite set of real numbers rather than over a finite set of integers => a non-discrete for loop
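If a sketch helps, here's the usual way to fake that with an actual float loop in Python (function, bounds, and step count are arbitrary picks of mine; you can't really loop over all the reals, so you approximate with a small step):

```python
def integral(f, a, b, n=100_000):
    """Approximate the integral of f from a to b with n skinny rectangles."""
    dx = (b - a) / n        # width of each rectangle
    total = 0.0
    x = a
    for _ in range(n):      # the "continuous" loop, made discrete again
        total += f(x) * dx  # rectangle area: height at left edge * width
        x += dx
    return total

print(integral(lambda x: x * x, 0.0, 1.0))  # ~0.3333, exact value is 1/3
```

Shrink dx (i.e. raise n) and the answer creeps toward the true value, which is the "skinny rectangles" limit the other reply described.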
Just a notational difference, other than the presence of mutation.
How is it harder to understand what 3 + 6 + 9 + ... + 3n means compared to the for loop? Is repeated addition hard to grasp?
No it's not harder to grasp, just less concise. Summation and Product notation exist for the same reason we don't say "a discernible but subtle level of humidity" and just use "moist" instead - it's more convenient. People can be taught to readily understand "moist" or the summation notation. It's much harder to teach people to read the longer notation more quickly.
While I acknowledge that I had some pretty awful math teachers, I would like to add that explaining math concepts in an edited video that you could spend a lot of time making has different demands than babysitting/teaching 30+ students at different levels multiple times a day with little prep time.
The hard part of math isn't understanding esoteric symbols, it's the theory behind it and its application. Number theory will mindbreak almost all people.
Number theory and higher levels of math are a completely different beast. Once your exam is over 50% just writing proofs you will change your tune. Unless you are built for it.
This isn't even god tier, it's just that more people are familiar with the basics of programming than higher level math, which is honestly a good thing.
In a way I always thought coding was more intuitive than maths writing norms.
That is, if you speak English. If not, it's just as daunting as weird Greek symbols.
I remember how confused I was when I first encountered i=i+1... like, what 🤨? How can this be correct, this thing has to be wrong... and then you start seeing the logic behind it and you're like "oooh, yeah, that seems to work... but still, this is wrong on almost every level in math"... and then you grow a bit older and realize that coding has nothing to do with math, instead it's got everything to do with problem solving. If you like to name your variables peach, grape, c*nt, you can, and if that helps you solve the problem, even better, just make it work, i.e. solve the problem 🤷.
But isn't that kinda true for most things? If you go down deep enough, almost all tasks end up in physics and thus maths somewhere. But if I'm stacking shelves, I don't care that there are some pretty complicated mathy physics things that determine how much weight I can stack on the shelf. I just stack it.
That's kinda how most of programming is related to maths. Yeah, math makes it all run, but I mostly just see maybe a little algebra and very simple boolean logic.
And the rest of my work is following best practices and trying to make sense of requirements.
That's advanced calculus, and my guess is, those notations were made up to give rise to a new field in math, which has more to do with computers than math, so I don't think that counts.
Sorta not really related but Freya's video on splines ("The Continuity of Splines") is a virtually perfect resource if you're interested in learning about... well... splines.
I think gamedev, or I guess graphics programming, visualizes maths pretty well. I literally quit high school because I could never make any progress in several areas, including math class. But once I read/watched more about gamedev, programming, and graphics programming on my own, I got to understand many mathematical terminologies better than I had ever been taught in any school.
I don't know her, so maybe my question is stupid, but does she explain math without using code?
I, honestly, am too stupid for programming, I don't understand it.
I understand the summation, not the second one
I've only watched a couple of her videos--on Splines and Bezier curves--and her explanations and animations were intuitive and beautiful to watch, but ultimately her target audience is game devs... So the answer to your question is "technically yes*"
*it's with the intent of learning to code the math
I don't know anything about the original post author, but product notation is the same as summation notation except that instead of adding each new term to the running total, you're multiplying each new term. You don't have to know programming to see from the code samples that the only difference in the code is += vs *= (well, maybe it would help to know that * means multiply; I honestly don't remember how common knowledge that is).
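In case the image doesn't load for anyone, a quick Python sketch of what I mean (my own throwaway names, not the original's):

```python
numbers = [1, 2, 3, 4]

total = 0            # sums start the running value at 0
for x in numbers:
    total += x       # sigma: keep adding

product = 1          # products start at 1 (starting at 0 would zero everything)
for x in numbers:
    product *= x     # pi: keep multiplying

print(total, product)  # 10 24
```

That different starting value (0 vs 1) is really the only other thing that changes between the two.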
Sort of; a lot of what she does is computer graphics, which just happens to be applications of math she explains. There is still code, but sometimes the "code" is a flow graph in Unreal Engine or Blender.
The biggest difference (other than the existence of infinity) is that the upper limit is inclusive in summation notation and exclusive in for loops. Threw me for a loop (hah) for a while.
i thought this was pretty weird too when i found out about it. i’m not entirely sure why it’s done this way but i think it has to do with conventions on where to start indexing. most programming languages start their indexing at 0 while much of the time in math the indexing starts at 1, so i=0 to n-1 becomes i=1 to n.
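e.g. in python (toy example of mine), the same sum comes out either way, you just shift the bounds:

```python
n = 10

# math convention: sum from i = 1 to n, upper bound inclusive
total = 0
for i in range(1, n + 1):  # n + 1 because range excludes its end
    total += i

# 0-based version: i = 0 to n-1, with the term shifted by one
assert total == sum(i + 1 for i in range(0, n))  # both are 55
```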
My abstract math professor showed us that sometimes it's useful to count natural numbers from 1 instead of 0, like in one problem we did concerning the relation Q on A = N × N defined by (m,n)Q(p,q) iff m/n = p/q (starting at 1 means n and q are never zero, so the fractions are always defined). I don't hate counting natural numbers from 1 anymore because of how commonly this sort of thing comes up in non-computer math contexts.
Definitely, although I’m sure that under the hood it’s all the same. Some (albeit high-level) languages also support a sum function that takes a generator as an input, which seems pretty close to this math notation.
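Python is what I had in mind; `sum` over a generator expression reads almost one-to-one with the sigma (toy example of mine):

```python
n = 100

# sum from i = 1 to n of 3*i, as a sum over a lazy generator
total = sum(3 * i for i in range(1, n + 1))
print(total)  # 15150, which matches 3 * n * (n + 1) / 2
```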
The education system creates scarcity of knowledge to increase the profit of investment and spending. Everything complex can be broken down into simple forms.
Everything dealing with capitalism ends up sounding like a conspiracy theory. You're like "of course people wouldn't actually take this thing we, as humans, need and sell it," when suddenly air has been commodified and those who can't afford it are seen as not deserving of air.
There's nothing special about a generic for loop (at least in C-like languages). There's no reason you couldn't do something like for (i = 0; true; i++) to make it infinite. Some languages even support an infinite list generator syntax like for i in [0..] (e.g. it lazily generates 0, then 1, then 2, etc. on each iteration) so you can use a for-each style loop to iterate infinitely.
Now, whether or not you should do such things is another question entirely. I won't pretend there aren't any instances where it's useful, but most of the time you're better off with a different structure.
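For the curious, Python's flavor of that lazy 0, 1, 2, ... stream is `itertools.count` (my own sketch, both the raw infinite loop and the saner capped version):

```python
from itertools import count, islice

# count(0) lazily yields 0, 1, 2, ... forever: a for-each style infinite loop
for i in count(0):
    if i * i > 50:  # without some break condition this never terminates
        break

# often cleaner: slice the infinite stream instead of breaking by hand
first_ten = list(islice(count(0), 10))
print(i, first_ten)  # 8 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```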