Reptorian @programming.dev
Posts 4
Comments 43
Things You Should Never Do, Part I (2000)
  • For small projects, rewriting is often superb. It allows us to reorganize a mess, apply new knowledge, add neat features and doodads, etc.

    This. I'm coding to contribute to an open-source project with a very small number of contributors, written in a non-mainstream domain-specific language. A lot of the code I wrote before has proven to work from time to time, but all of it could benefit from better outputs and a better GUI. So I ended up reengineering the entire thing, and that will take a really long time; however, I run a lot of tests to ensure it works.

  • Is there a better algorithm for converting big binaries into decimal?
  • I don’t understand your problem well enough to know, if you can (or want to) use this here, but you might be able to tap into that C performance with the radix conversion formatting of printf.

    The problem is printing a big binary number in decimal. That's not an easy problem, because 10 is not a power of 2. If we lived in a base-16 world, this would be very easy to solve in O(n).

    Also, I can't access that, as G'MIC is a language that can't really communicate with other languages; it's not meant to share memory.

  • Is there a better algorithm for converting big binaries into decimal?
  • This could be an XY problem, that is, you’re trying to solve problem X, rather than the underlying problem Y. Y here being: Why do you need things to be in decimal in the first place?

    I wouldn't say it's needed; this is more of a fun thing for me. The only thing I'm using it for is Tupper's self-referential formula, and my current approach of converting base 1<<24 to base 1e7 works instantly for 106x17 binary digits. When I load an image into that filter that's larger than somewhere over 256x256, delays become noticeable because the underlying algorithm isn't that great. It could also have to do with the fact that G'MIC is interpreted, and despite its JIT support, this is not the kind of problem it's meant to solve (it's domain-specific). On the bright side, this algorithm will work with any pair of data types as long as one is one level higher than the other; in this case I'm using the lowest levels (single and double), and the bigger the data type, the faster it can be.

  • Is there a better algorithm for converting big binaries into decimal?

    At the moment, I am stuck with using single-precision and double-precision floats. So, the maximum exactly representable integer for single is 1<<24, while for double it is 1<<53.
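Those two limits can be checked directly in Python; a minimal sketch, assuming the floats in question are IEEE-754 singles and doubles (the helper `to_f32` is mine, for illustration):

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a value through IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Singles represent every integer up to 2**24 exactly; 2**24 + 1 rounds away.
assert to_f32(2**24) == 2**24
assert to_f32(2**24 + 1) == 2**24        # precision lost

# Doubles represent every integer up to 2**53 exactly; 2**53 + 1 rounds away.
assert float(2**53) == 2**53
assert float(2**53 + 1) == float(2**53)  # precision lost
```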

    Because of this, I made the following script here - https://gist.github.com/Reptorian1125/71e3eec41e44e2e3d896a10f2a51448e .

    Allow me to clarify the script above. In the first part, what rep_bin2dec does is return the converted values into the status. So, when I do ${} or variable=${rep_bin2dec\ ???}, I get the status string.

    In the second part, rep_bin2dec_base is the basis for getting rep_bin2dec to work, and _rep_bin2dec_base prints the base_10M array as a string.

    So, how does rep_bin2dec_base convert a big binary into a big decimal?

    1. If the binary image has fewer than 54 elements, the script uses 0b{}, which lets me directly convert binary to decimal; 0b is a binary literal, much as in Python and C++. From that point, it's pretty obvious what to do, so if the dimension is less than 54, we're done after this step. If not, move on to step 2.

    2. Convert the binary image into an image of base (1<<24) digits representing the value of that image. Note that there are two channels, "[ output_value , y ]"; y here represents the digit position in base (1<<24).

    3. Turn the converted image into a dynamic array image. This allows us to remove unused digits. You can look at steps 2 and 3 as converting a binary string into an array of base (1<<24) digits stored in a dynamic array. Also, note that start_value is stored; that's the very first digit.

    4. number_of_decimals is the predicted number of characters after the binary-to-decimal conversion. Then there's multi-threading, which is activated depending on the size of the dynamic array image. decimal_conversion_array_size and result_conversion_array_size define the sizes of the temporary arrays used to convert from base (1<<24) into base 10M. Finally, there's a new image which will use base 10 million for easy printing, and set is used to add the first digit of base (1<<24), which will then be converted to base 10M.

    5. With eval[-2], we now process the base (1<<24) image and convert it into base 10M. There's an implicit loop, so you can imagine a "for y" after begin(), and begin() can be seen as the setup code.

    A few notes: copy() basically allows me to alter an array. In this case, opacity is negative, so it adds the multiplication by the positive opacity; if opacity were between 0 and 1, it would be treated similarly to how the opacity of one layer alters an image. And the multiplication algorithm used to convert between bases is Schönhage-Strassen multiplication, but without the FFT part — i.e., schoolbook long multiplication with carries.

    So, here's how that works, for 99 × 19:

           9  9
        ×  1  9
        ───────
          81 81
        9  9
        ───────
        1  8  8  1

    Basically, it's long multiplication, and you can see the carrying of remainders: the rightmost column is 81, giving digit 1 with carry 8. The next column is 81 + 9 + 8 = 98, giving digit 8 with carry 9. The next column is 9 + 9 = 18, giving digit 8 with carry 1, and the final carry gives the leading 1. So you can see how this results in 1881.

    6. After the conversion to base 10M, depending on your inputs, it either sets the status value to the decimal representation or preserves it as base 10M for easy printing with _rep_bin2dec_base after alteration.

    There are some more details, but I find it really hard to explain.
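The multiply-and-carry base conversion that the steps above describe can be sketched in a few lines of Python (the function name `convert_base` is mine, not from the script):

```python
def convert_base(digits, src_base, dst_base):
    """Convert a big number given as limbs in src_base (most significant
    first) into limbs in dst_base, using only schoolbook multiply-and-carry.
    This is the quadratic algorithm: result = result*src_base + d per limb."""
    result = [0]  # little-endian limbs in dst_base
    for d in digits:
        carry = d
        for i in range(len(result)):
            carry += result[i] * src_base
            result[i] = carry % dst_base
            carry //= dst_base
        while carry:                      # propagate remaining carry
            result.append(carry % dst_base)
            carry //= dst_base
    return result[::-1]                   # most significant first
```

For example, `convert_base([1,0,0,1,1], 2, 10)` turns the binary digits of 19 into `[1, 9]`, and `convert_base([64, 0], 1 << 24, 10**7)` turns 2^30 expressed in base (1<<24) into its base-10M limbs `[107, 3741824]`.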

    So, my question is: what are some good algorithms for printing out huge binaries as decimal? I know Python is insanely good at that, but I can't seem to understand how it does it so well. I know it involves conversion to base 2^30, i.e. 1<<30.
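For what it's worth, the trick big-integer libraries use to beat the quadratic digit-by-digit loop is divide-and-conquer: split the number at a power of ten near the middle and recurse on both halves, so the heavy work becomes a few big divisions. A rough sketch of the idea in Python, not the actual CPython implementation:

```python
def big_to_decimal(n: int) -> str:
    """Divide-and-conquer binary-to-decimal: split at a power of ten near
    the midpoint and recurse; with fast division this is subquadratic."""
    if n < 10**9:                       # small base case: convert directly
        return str(n)
    # Lower bound on the decimal digit count from the bit length
    # (log10(2) ~= 0.30103), so the high half is never zero.
    digits = n.bit_length() * 30103 // 100000
    half = digits // 2
    hi, lo = divmod(n, 10 ** half)
    return big_to_decimal(hi) + big_to_decimal(lo).zfill(half)
```

Each level halves the digit count, so for a 90000-digit number there are only about 13 levels of (large) divisions instead of ~90000 small multiply-and-carry passes.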

    At the moment, I can convert a 90000-digit binary in 0.35 s, which is bad compared to what I've seen in Python. It's really bad with 1M binary digits.

    Who's working on a "smaller Rust"?
  • Coming from someone who has used 4 different languages (C#, C++, Python, and G'MIC), I just feel more comfortable when there are explicit end blocks, which is why I don't like Python. Of those languages, only Python lacks an explicit end block, which is off-putting in my opinion, and there aren't any other options filling a role similar to Python's.

  • January 2024 monthly "What are you working on?" thread
  • It's a bit of a pain to finish, but I'm basically working on creating an array of numbers to assist in sorting Unicode characters, and I'm making string-processing commands for the G'MIC scripting language. That means I have to sort hundreds of thousands of characters by hand, and I've already sorted tens of thousands of them. I've already done string_permutations, and you can find the permutation at a given index, or find the index at which a permutation can be found. However, those commands need the array of numbers for an additional sorting option I'll add.
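The permutation-to-index and index-to-permutation idea mentioned above is usually done with the factorial number system (Lehmer code). A minimal Python sketch, assuming distinct characters; the function names are mine, not the actual G'MIC commands:

```python
from math import factorial

def permutation_rank(perm: str) -> int:
    """Lexicographic index of perm among all orderings of its characters."""
    rank, pool = 0, sorted(perm)
    for ch in perm:
        pos = pool.index(ch)
        rank += pos * factorial(len(pool) - 1)  # skip all smaller branches
        pool.pop(pos)
    return rank

def permutation_at(chars: str, rank: int) -> str:
    """Inverse: the permutation of chars at a given lexicographic index."""
    pool, out = sorted(chars), []
    while pool:
        f = factorial(len(pool) - 1)
        out.append(pool.pop(rank // f))         # pick the branch, descend
        rank %= f
    return ''.join(out)
```

For example, `permutation_rank("cba")` is 5 (the last of the 3! = 6 orderings of "abc"), and `permutation_at("abc", 5)` gives `"cba"` back.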

  • I made a Brainfuck interpreter within G'MIC (Shell-Like language for image processing)

    Three things before I get to the relevant details.

    1. Brainfuck is an esoteric language which uses 8 characters. I'll leave the details here - https://en.wikipedia.org/wiki/Brainfuck
    2. G'MIC is a language largely inspired by bash and one other shell scripting language, and partly inspired by C++ for its JIT compilation. It's two languages in one: one outside the JIT and one inside it. Its main purpose is image processing, and it can do 3D things too — basically image-related things. It's Turing-complete, so making files has been done with it. Even making a compiled executable program is possible in principle (but I would point you to writing C++ and compiling that instead).
    3. I am a G'MIC filter developer.

    Anyway, I took some time to code up a Brainfuck interpreter in G'MIC. It wasn't that hard to do once I understood what Brainfuck is as a language. I made one earlier than this, but it required users to define inputs beforehand. Recently, I created the rep_cin command to relieve users of doing that; it is the closest thing to input() in Python or std::cin in C++.

    Anyway, here's the code for my Brainfuck interpreter:

```
#@cli run_brainfuck_it: brainfuck_file,'_enforce_numbers_input={ 0=false | 1=true },_size_of_array>0
#@cli : Interprets Brainfuck code file within G'MIC brainfuck_interpreter.
#@cli : Default values: ,'_enforce_numbers_input=0','_size_of_array=512'
run_brainfuck_it:
  skip ${2=0},${3=512}
  it $1
  _brainfuck_interpreter $2,$3
  um run_brainfuck_it,run_brainfuck,_brainfuck_interpreter,_brainfuck_interpreter_byte_input

#@cli run_brainfuck: brainfuck_code,'_enforce_numbers_input={ 0=false | 1=true },_size_of_array>0
#@cli : Interprets Brainfuck code within G'MIC brainfuck_interpreter.
#@cli : Default values: ,'_enforce_numbers_input=0','_size_of_array=512'
run_brainfuck:
  skip ${2=0},${3=512}
  ('$1')
  _brainfuck_interpreter $2,$3
  um run_brainfuck_it,run_brainfuck,_brainfuck_interpreter,_brainfuck_interpreter_byte_input

_brainfuck_interpreter:
  # 1. Convert image into dynamic image.
  resize 1,{whd#-1},1,1,-1
  ({h}) append y            # Convert string image into dynamic image.
  name[-1] brainfuck_code   # Name image brainfuck_code.

  # 2. Remove unused characters.
  eval "
    const brainfuck_code=$brainfuck_code;
    for(p=h#brainfuck_code-2,p>-1,--p,
      char=i[#brainfuck_code,p];
      if(!(inrange(char,'+','.',1,1)||(find('<>[]',char,0,1)!=-1)),
        da_remove(#brainfuck_code,p);
      );
    );
    if(!da_size(#brainfuck_code),run('error inval_code'););
    da_freeze(#brainfuck_code);
  "

  # 3. Evaluate brackets.
  eval[brainfuck_code] >"
    begin(level=0;);
    i=='['?++level:
    i==']'?--level;
    if(level<0,run('error inv_bracks'););
    end(if(level,run('error inv_bracks');););
  "

  1x2           # Create 2 images of 1x1x1x1: one stores printed characters, the other allows inputs.
  _arg_level=1

  # 4. Create JIT code for executing the Brainfuck code.
  repeat h#$brainfuck_code {
    idx:=i[#0,$>]
    if $idx==',' code_str.=run('$0_byte_input[-2]\ $1');ind_list[ind]=i#-2; continue fi
    if $idx=='.' code_str.=da_push(#-1,ind_list[ind]); continue fi
    if $idx=='+' code_str.=ind_list[ind]++;ind_list[ind]%=256; continue fi
    if $idx=='-' code_str.=ind_list[ind]--;ind_list[ind]%=256; continue fi
    if $idx=='<' code_str.=if(!inrange(--ind,0,$2,1,0),run("'error out_of_bound'");); continue fi
    if $idx=='>' code_str.=if(!inrange(++ind,0,$2,1,0),run("'error out_of_bound'");); continue fi
    if $idx=='[' code_str.=repeat(inf,if(!ind_list[ind],break();); continue fi
    if $idx==']' code_str.=); fi
  }

  # 5. Execute created JIT code. 'v +' and 'v -' change the verbosity level and are not part
  # of the JIT execution; e[] prints to the console.
  v + eval >begin(ind=0;ind_list=vector$2(););$code_str;end(da_freeze(#-1);); v -

  # 6. Print out the executed code's result.
  v + e[$^] "Brainfuck Output: "{t} v -
  remove

_brainfuck_interpreter_byte_input:
  repeat inf {
    wait  # For some reason, I had to add this to make this code work!
    if $>
      rep_cin "Brainfuck Interpreter - Wrong Input! Insert Integer for Argument#"$_arg_level": "
    else
      rep_cin "Brainfuck Interpreter - Enter Argument#"$_arg_level" (Integers Only): "
    fi
    if $1 input:=(${}%208)+_'0' else input=${} fi
    if isint($input) break fi
  }
  if $1
    v + e[$^] "Brainfuck Interpreter Inserted Argument#"$_arg_level": "{$input-'0'} v -
  else
    input%=256
    v + e[$^] "Brainfuck Interpreter Inserted Argument#"$_arg_level": "$input" :: "{$input} v -
  fi
  _arg_level+=1
  f[-1] $input
```

    And the CLI test:

    C:\Users\User\Documents\G'MIC\Brainfuck Interpreter>gmic "brainfuck_interpreter.gmic" run_brainfuck \">,>,<<++++++[>-------->--------<<-]>[>[>+>+<<-]>[<+>-]<<-]>[-]>+>>++++++++++<[->-[>>>]++++++++++<<+[<<<]>>>>]<-<++++++++++>>>[-<<<->>>]<<<<++++++[>++++++++>[++++++++>]<[<]>-]>>[.<<]<[<<]>>.\",1
    [gmic]./ Start G'MIC interpreter (v.3.3.3).
    [gmic]./ Input custom command file 'brainfuck_interpreter.gmic' (4 new, total: 4806).
    [gmic]./ Brainfuck Interpreter Inserted Argument#1: 31
    [gmic]./ Brainfuck Interpreter Inserted Argument#2: 3
    [gmic]./ Brainfuck Output: 93
    [gmic]./ End G'MIC interpreter.
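For comparison, the same interpreter loop is quite compact in Python. This is a generic sketch (pre-matched brackets, wrapping byte cells), not a port of the G'MIC code above:

```python
def brainfuck(code: str, inputs: bytes = b"", tape_size: int = 512) -> str:
    """Minimal Brainfuck interpreter: 8 commands, 8-bit wrapping cells."""
    # Pre-match brackets so '[' and ']' can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape = [0] * tape_size
    out, ptr, pc, inp = [], 0, 0, iter(inputs)
    while pc < len(code):
        c = code[pc]
        if c == '+':   tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == ',': tape[ptr] = next(inp, 0)   # 0 when input runs out
        elif c == '[' and not tape[ptr]: pc = jumps[pc]
        elif c == ']' and tape[ptr]:     pc = jumps[pc]
        pc += 1
    return ''.join(out)
```

For example, `brainfuck(",+.", bytes([65]))` reads 'A', increments it, and prints "B".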

    You don't need a map for that
  • Chances are there's something similar to Python's dictionary in your language, or it's at least an import/#include away. Although I don't use general-purpose programming languages much, in the language I use (G'MIC) I do something like dict$var=input, where $var is a defined variable; this way I can access input by doing ${dict$var}, which is similar to a Python dictionary. In C++, there are hash table implementations out there on GitHub (besides the standard std::unordered_map). That being said, there are times when you don't need a hashmap-backed table; sometimes it's as simple as basic arithmetic to access your data.
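As a tiny illustration of the "basic arithmetic instead of a map" point — a sketch in Python, with names of my own choosing: when the keys map onto a small dense integer range, a plain list indexed by a computed offset does the job without any hashing.

```python
def letter_counts(text: str) -> list[int]:
    """Count a-z frequencies with a fixed list; the index is computed
    arithmetically from the character code, so no dict/hashing is needed."""
    counts = [0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            counts[ord(ch) - ord('a')] += 1  # arithmetic index, not a key
    return counts
```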

  • Welcome to the Chata programming language
  • Seems like a good idea; I'm hoping the syntax is sane. As far as languages go, I think you're missing G'MIC in your comparison, as it has things like FFT and other tools, all for image processing, which is just one part of digital signal processing. And then there's Python with its libraries, and so on.

  • *Permanently Deleted*
  • For raster-graphics image processing, I'd highly recommend G'MIC. Otherwise, Python — especially for strings, using the regex library. I wish there were a vector-graphics version of G'MIC.

  • *Permanently Deleted*
  • I only do raster-graphics image processing, so G'MIC it is. An entire programming language, and a library in and of itself for that purpose.

    For non-DSLs, I don't have a favorite. I'd choose one of these: Python, C++, C#.

  • What programming languages aren't too criticized here?
  • Every language has its own pitfalls. The answer to picking a language is to pick whatever works for you. There may even be domain-specific languages for a domain you're interested in, and they can be far more flexible than general-purpose solutions for that domain too.

    I use 4 languages.

    1. C++ for adding features to a program.
    2. C# for making .dlls for an application (Paint.NET). A similar purpose to what I do with G'MIC, except much more limited.
    3. Python for processing strings
    4. G'MIC for creating/editing raster graphics images (volumetric too)

    Now, I wish there were a vector equivalent to G'MIC, but there isn't.

  • DSLs are a waste of time
  • Here's my opinion: a well-developed DSL can arguably be more flexible than, say, Python — even with Python's existing libraries — within the DSL's specific domain. So, if one's needs are limited to that domain, a DSL may very well be preferable to a general-purpose language.

    I have coded in C#, Python, and C++, and currently, nearly every day, G'MIC. Which of those is a DSL? The last one. What is it? A domain-specific language geared toward raster-graphics image processing. Why do I use it? Look at the stack-based processing, the commands, the built-in mathematical functions: it has far more built in than, say, the Pillow library for Python. And since I only create images with code, I am happy with it. I've even done things like Python itertools-style combinatorics, plus rank2list/list2rank variations of them, which aren't image processing by themselves but can aid it.

    If I felt it was too limited for its domain, I wouldn't use it. DSLs are only good if the other options are much more difficult to build with, and their flexibility is often enough to entice their audience, which is one with limited use cases. Of course, general-purpose languages are usually better than most DSLs even within the DSL's own domain, because of wider support and a wider audience. Given enough time and support for their domains, more DSLs would be preferable to general-purpose languages, in my opinion.

  • Is there a text editor that would allow me to create syntax highlighting easily?

    Basically just what the title says. The situation: I use a domain-specific language called G'MIC, and to this day I haven't found a satisfactory answer to its lack of syntax highlighting. At the moment, I am using KDE's Kate, as it's pretty good at structuring the code with its find/replace feature, tab indicators, and multi-window support.
