f.haeder.net


Items tagged with: Java

I'm moving to the #RedHat #OpenJDK team https://jmtd.net/log/openjdk/ preparing for #ibm takeover? #java
 
#ouch - I was at #inbev today - and "had" to sit through #windoze and #java #js #bullshit.... I still feel sick .....
cybertreehouse 
The JVM framework Micronaut is suited to both cloud-native and web applications: is it an alternative to Grails and Spring Boot? #Java #Micronaut #Webentwicklung
 

Why is 2 * (i * i) faster than 2 * i * i in Java?


The following Java program takes on average between 0.50s and 0.55s to run: public static void main(String []args) { long startTime = System.nanoTime(); int n = 0; for (int i = 0; i <
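The excerpt above is cut off; here is a minimal reconstruction of the benchmark, assuming the well-known form of the question (the 1 000 000 000 bound is consistent with the cmpl ..., #999999985 checks in the assembly below, and the loop body is the 2 * (i * i) variant from the title):

public static void main(String[] args) {
    long startTime = System.nanoTime();
    int n = 0;
    for (int i = 0; i < 1_000_000_000; i++) {
        n += 2 * (i * i);                       // the other variant: n += 2 * i * i;
    }
    System.out.println((System.nanoTime() - startTime) / 1e9 + " s, n = " + n);
}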
Article word count: 1790

HN Discussion: https://news.ycombinator.com/item?id=18573308
Posted by trequartista (karma: 801)
Post stats: Points: 148 - Comments: 19 - 2018-11-30T22:26:14Z

\#HackerNews #faster #java #than #why
Article content:

Image/photo

There is a slight difference in the ordering of the bytecode.

2 * (i * i):

iconst_2
iload0
iload0
imul
imul
iadd

vs 2 * i * i:

iconst_2
iload0
imul
iload0
imul
iadd

At first sight this should not make a difference; if anything, the second version looks slightly better, since it uses one fewer stack slot.
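If you want to reproduce the bytecode comparison yourself, a small sketch is enough (class and method names here are illustrative, not from the article); compile it with javac and run javap -c on the resulting class to see the two instruction orderings quoted above:

// Hedged reproduction sketch; names are illustrative.
// Compile with javac, then inspect with: javap -c Variants
class Variants {
    static int withParens(int n, int i)    { return n + 2 * (i * i); }
    static int withoutParens(int n, int i) { return n + 2 * i * i; }
}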

So we need to dig deeper into the lower level (JIT)^1.

Remember that the JIT tends to unroll small loops very aggressively. Indeed, we observe a 16x unrolling for the 2 * (i * i) case:

030 B2: # B2 B3 <- B1 B2 Loop: B2-B2 inner main of N18 Freq: 1e+006
030 addl R11, RBP # int
033 movl RBP, R13 # spill
036 addl RBP, #14 # int
039 imull RBP, RBP # int
03c movl R9, R13 # spill
03f addl R9, #13 # int
043 imull R9, R9 # int
047 sall RBP, #1
049 sall R9, #1
04c movl R8, R13 # spill
04f addl R8, #15 # int
053 movl R10, R8 # spill
056 movdl XMM1, R8 # spill
05b imull R10, R8 # int
05f movl R8, R13 # spill
062 addl R8, #12 # int
066 imull R8, R8 # int
06a sall R10, #1
06d movl [rsp + #32], R10 # spill
072 sall R8, #1
075 movl RBX, R13 # spill
078 addl RBX, #11 # int
07b imull RBX, RBX # int
07e movl RCX, R13 # spill
081 addl RCX, #10 # int
084 imull RCX, RCX # int
087 sall RBX, #1
089 sall RCX, #1
08b movl RDX, R13 # spill
08e addl RDX, #8 # int
091 imull RDX, RDX # int
094 movl RDI, R13 # spill
097 addl RDI, #7 # int
09a imull RDI, RDI # int
09d sall RDX, #1
09f sall RDI, #1
0a1 movl RAX, R13 # spill
0a4 addl RAX, #6 # int
0a7 imull RAX, RAX # int
0aa movl RSI, R13 # spill
0ad addl RSI, #4 # int
0b0 imull RSI, RSI # int
0b3 sall RAX, #1
0b5 sall RSI, #1
0b7 movl R10, R13 # spill
0ba addl R10, #2 # int
0be imull R10, R10 # int
0c2 movl R14, R13 # spill
0c5 incl R14 # int
0c8 imull R14, R14 # int
0cc sall R10, #1
0cf sall R14, #1
0d2 addl R14, R11 # int
0d5 addl R14, R10 # int
0d8 movl R10, R13 # spill
0db addl R10, #3 # int
0df imull R10, R10 # int
0e3 movl R11, R13 # spill
0e6 addl R11, #5 # int
0ea imull R11, R11 # int
0ee sall R10, #1
0f1 addl R10, R14 # int
0f4 addl R10, RSI # int
0f7 sall R11, #1
0fa addl R11, R10 # int
0fd addl R11, RAX # int
100 addl R11, RDI # int
103 addl R11, RDX # int
106 movl R10, R13 # spill
109 addl R10, #9 # int
10d imull R10, R10 # int
111 sall R10, #1
114 addl R10, R11 # int
117 addl R10, RCX # int
11a addl R10, RBX # int
11d addl R10, R8 # int
120 addl R9, R10 # int
123 addl RBP, R9 # int
126 addl RBP, [RSP + #32 (32-bit)] # int
12a addl R13, #16 # int
12e movl R11, R13 # spill
131 imull R11, R13 # int
135 sall R11, #1
138 cmpl R13, #999999985
13f jl B2 # loop end P=1.000000 C=6554623.000000

We see that there is 1 register that is "spilled" onto the stack.
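To make the listing easier to follow, here is a hedged, Java-level sketch of what the 16x-unrolled body effectively computes (illustrative only; the JIT of course works on the original one-term loop, and in the generated code the 16 terms are fully unrolled rather than looped):

static int sumWithParensUnrolledSketch() {
    int n = 0;
    for (int i = 0; i < 1_000_000_000; i += 16) {
        for (int k = 0; k < 16; k++) {
            int t = i + k;          // addl with the constant k, as in the listing
            n += (t * t) << 1;      // imull, then sall #1 instead of an explicit * 2
        }
    }
    return n;
}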

And for the 2 * i * i version:

05a B3: # B2 B4 <- B1 B2 Loop: B3-B2 inner main of N18 Freq: 1e+006
05a addl RBX, R11 # int
05d movl [rsp + #32], RBX # spill
061 movl R11, R8 # spill
064 addl R11, #15 # int
068 movl [rsp + #36], R11 # spill
06d movl R11, R8 # spill
070 addl R11, #14 # int
074 movl R10, R9 # spill
077 addl R10, #16 # int
07b movdl XMM2, R10 # spill
080 movl RCX, R9 # spill
083 addl RCX, #14 # int
086 movdl XMM1, RCX # spill
08a movl R10, R9 # spill
08d addl R10, #12 # int
091 movdl XMM4, R10 # spill
096 movl RCX, R9 # spill
099 addl RCX, #10 # int
09c movdl XMM6, RCX # spill
0a0 movl RBX, R9 # spill
0a3 addl RBX, #8 # int
0a6 movl RCX, R9 # spill
0a9 addl RCX, #6 # int
0ac movl RDX, R9 # spill
0af addl RDX, #4 # int
0b2 addl R9, #2 # int
0b6 movl R10, R14 # spill
0b9 addl R10, #22 # int
0bd movdl XMM3, R10 # spill
0c2 movl RDI, R14 # spill
0c5 addl RDI, #20 # int
0c8 movl RAX, R14 # spill
0cb addl RAX, #32 # int
0ce movl RSI, R14 # spill
0d1 addl RSI, #18 # int
0d4 movl R13, R14 # spill
0d7 addl R13, #24 # int
0db movl R10, R14 # spill
0de addl R10, #26 # int
0e2 movl [rsp + #40], R10 # spill
0e7 movl RBP, R14 # spill
0ea addl RBP, #28 # int
0ed imull RBP, R11 # int
0f1 addl R14, #30 # int
0f5 imull R14, [RSP + #36 (32-bit)] # int
0fb movl R10, R8 # spill
0fe addl R10, #11 # int
102 movdl R11, XMM3 # spill
107 imull R11, R10 # int
10b movl [rsp + #44], R11 # spill
110 movl R10, R8 # spill
113 addl R10, #10 # int
117 imull RDI, R10 # int
11b movl R11, R8 # spill
11e addl R11, #8 # int
122 movdl R10, XMM2 # spill
127 imull R10, R11 # int
12b movl [rsp + #48], R10 # spill
130 movl R10, R8 # spill
133 addl R10, #7 # int
137 movdl R11, XMM1 # spill
13c imull R11, R10 # int
140 movl [rsp + #52], R11 # spill
145 movl R11, R8 # spill
148 addl R11, #6 # int
14c movdl R10, XMM4 # spill
151 imull R10, R11 # int
155 movl [rsp + #56], R10 # spill
15a movl R10, R8 # spill
15d addl R10, #5 # int
161 movdl R11, XMM6 # spill
166 imull R11, R10 # int
16a movl [rsp + #60], R11 # spill
16f movl R11, R8 # spill
172 addl R11, #4 # int
176 imull RBX, R11 # int
17a movl R11, R8 # spill
17d addl R11, #3 # int
181 imull RCX, R11 # int
185 movl R10, R8 # spill
188 addl R10, #2 # int
18c imull RDX, R10 # int
190 movl R11, R8 # spill
193 incl R11 # int
196 imull R9, R11 # int
19a addl R9, [RSP + #32 (32-bit)] # int
19f addl R9, RDX # int
1a2 addl R9, RCX # int
1a5 addl R9, RBX # int
1a8 addl R9, [RSP + #60 (32-bit)] # int
1ad addl R9, [RSP + #56 (32-bit)] # int
1b2 addl R9, [RSP + #52 (32-bit)] # int
1b7 addl R9, [RSP + #48 (32-bit)] # int
1bc movl R10, R8 # spill
1bf addl R10, #9 # int
1c3 imull R10, RSI # int
1c7 addl R10, R9 # int
1ca addl R10, RDI # int
1cd addl R10, [RSP + #44 (32-bit)] # int
1d2 movl R11, R8 # spill
1d5 addl R11, #12 # int
1d9 imull R13, R11 # int
1dd addl R13, R10 # int
1e0 movl R10, R8 # spill
1e3 addl R10, #13 # int
1e7 imull R10, [RSP + #40 (32-bit)] # int
1ed addl R10, R13 # int
1f0 addl RBP, R10 # int
1f3 addl R14, RBP # int
1f6 movl R10, R8 # spill
1f9 addl R10, #16 # int
1fd cmpl R10, #999999985
204 jl B2 # loop end P=1.000000 C=7419903.000000

Here we observe much more "spilling" and more accesses to the stack [RSP + ...], due to more intermediate results that need to be preserved.

Thus the answer to the question is simple: 2 * (i * i) is faster than 2 * i * i because the JIT happens to generate better assembly code for the first form.
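As a side note not taken from the article: the hand-rolled System.nanoTime() benchmark is fragile; a hedged JMH sketch, assuming JMH is on the classpath, measures the two variants more reliably:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;

@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
public class MulBench {
    @Benchmark
    public int withParens() {       // returning n keeps the loop from being dead-code-eliminated
        int n = 0;
        for (int i = 0; i < 1_000_000_000; i++) n += 2 * (i * i);
        return n;
    }

    @Benchmark
    public int withoutParens() {
        int n = 0;
        for (int i = 0; i < 1_000_000_000; i++) n += 2 * i * i;
        return n;
    }
}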

But of course it is obvious that neither the first nor the second version is any good; the loop could really benefit from vectorization, since any x86-64 CPU has at least SSE2 support.

So itʼs an issue of the optimizer; as is often the case, it unrolls too aggressively and shoots itself in the foot, all the while missing out on various other opportunities.

In fact, modern x86-64 CPUs break down the instructions further into micro-ops (µops) and with features like register renaming, µop caches and loop buffers, loop optimization takes a lot more finesse than a simple unrolling for optimal performance. [1]According to Agner Fogʼs optimization guide:
The gain in performance due to the µop cache can be quite considerable if the average instruction length is more than 4 bytes. The following methods of optimizing the use of the µop cache may be considered:

\* Make sure that critical loops are small enough to fit into the µop cache.
\* Align the most critical loop entries and function entries by 32.
\* Avoid unnecessary loop unrolling.
\* Avoid instructions that have extra load time
. . .

Regarding those load times - [2]even the fastest L1D hit costs 4 cycles, an extra register and µop, so yes, even a few accesses to memory will hurt performance in tight loops.

But back to the vectorization opportunity - to see how fast it can be, [3]we can compile a similar C application with GCC, which outright vectorizes it (AVX2 is shown, SSE2 is similar)^2:

  vmovdqa ymm0, YMMWORD PTR .LC0[rip]
  vmovdqa ymm3, YMMWORD PTR .LC1[rip]
  xor eax, eax
  vpxor xmm2, xmm2, xmm2
.L2:
  vpmulld ymm1, ymm0, ymm0
  inc eax
  vpaddd ymm0, ymm0, ymm3
  vpslld ymm1, ymm1, 1
  vpaddd ymm2, ymm2, ymm1
  cmp eax, 125000000      ; 8 calculations per iteration
  jne .L2
  vmovdqa xmm0, xmm2
  vextracti128 xmm2, ymm2, 1
  vpaddd xmm2, xmm0, xmm2
  vpsrldq xmm0, xmm2, 8
  vpaddd xmm0, xmm2, xmm0
  vpsrldq xmm1, xmm0, 4
  vpaddd xmm0, xmm0, xmm1
  vmovd eax, xmm0
  vzeroupper

With run times:
\* SSE: 0.24 s, or 2 times faster.
\* AVX: 0.15 s, or 3 times faster.
\* AVX2: 0.08 s, or 5 times faster.

^1 [To get the JIT-generated assembly output, [4]get a debug JVM and run with -XX:+PrintOptoAssembly]

^2 [The C version is compiled with the -fwrapv flag, which enables GCC to treat signed integer overflow as a twoʼs-complement wrap-around.]
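One more note not in the article: Java int arithmetic already wraps around on overflow by definition, which is exactly the behaviour -fwrapv asks of GCC, so both versions compute the same (overflowing) sum. A minimal illustration:

int x = Integer.MAX_VALUE;
System.out.println(x + 1);   // prints -2147483648: two's-complement wrap-around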

References

Visible links
1. https://www.agner.org/optimize/microarchitecture.pdf
2. https://stackoverflow.com/questions/4087280/approximate-cost-to-access-various-caches-and-main-memory
3. https://gcc.godbolt.org/z/DdEDny
4. https://github.com/ojdkbuild/ojdkbuild/releases

HackerNewsBot debug: Calculated post rank: 105 - Loop: 203 - Rank min: 100 - Author rank: 92
 
#ibm #redhat on the technological transition of #OpenJDK

https://www.redhat.com/en/blog/technological-transition-openjdk #java #programming
 
Governikus: Personalausweis-Webanwendungen lassen sich austricksen #E-Personalausweis #Java #Programmiersprache #RFID #Sicherheitslücke #Applikationen #Internet #PolitikRecht #Security
 
60 years ago, on 18 November 1958, #plasticant #plastikant was patented by Jenö #Paksy from #Ungarn (Hungary). In the 1960s and 1970s the construction #Spielzeug (toy) system was widely popular, and it is still sold in Hungary today under the name #Jáva
Plasticant – a toy of the 60s and 70s
 

Simple table size estimates and 128-bit numbers (Java Edition)


#bot #daniellemire #java
Simple table size estimates and 128-bit numbers (Java Edition)

Daniel Lemire's blog: Simple table size estimates and 128-bit numbers (Java Edition) (Daniel Lemire)

 
Java programmer working on complex warehouse-logistics software and proud mom of four-year-old twins. Impossible? Not at all! #Beruf #Familie #IT-Branche #Java
 
When this #microsoft #propaganda site mentions #java it's actually promoting Microsoft's lock-in, proprietary AD
 
Corretto OpenJDK: Amazon releases its own free Java distribution #Java #AWS #Amazon #CloudComputing #JamesGosling #OpenJDK #Programmiersprache #Applikationen #OpenSource #Softwareentwicklung
 
Amazon has responded to many developers' concerns about paid long-term support for Java by offering its own OpenJDK. #AWS #Amazon #AmazonCorretto #Java #OpenJDK
 
 
Migrating from Oracle #JDK to #OpenJDK on Red Hat Enterprise Linux: What you need to know

https://developers.redhat.com/blog/2018/11/05/migrating-from-oracle-jdk-to-openjdk-on-red-hat-enterprise-linux-what-you-need-to-know/ #ibm #oracle #java
Migrating from Oracle JDK to OpenJDK on Red Hat Enterprise Linux: What you need to know
 
Hey everyone, I’m #newhere. I’m interested in #golang, #java, #linux, and #programming. Currently working as a web app Java dev but looking into golang as a possible future language
 

Why I love Common Lisp and hate Java (2012)


“Common what?” is a common reply I get when I mention Common Lisp. Perhaps rightly so, since Common Lisp is not all that common these days. Developed in the sixties, it is one of the ol…
Article word count: 984

HN Discussion: https://news.ycombinator.com/item?id=18373159
Posted by pmoriarty (karma: 29066)
Post stats: Points: 76 - Comments: 88 - 2018-11-03T23:53:10Z

\#HackerNews #2012 #and #common #hate #java #lisp #love #why
Article content:

Image/photo

[1]“Common what?” is a common reply I get when I mention Common Lisp. Perhaps rightly so, since Common Lisp is not all that common these days.

Developed in the sixties, it is one of the oldest programming languages out there. In its heyday it was used mostly for Artificial Intelligence research at MIT, Stanford, Carnegie Mellon and the like, and therefore has a lingering association with AI. People not in AI shy away from Lisp. Common Lisp is a powerful and versatile programming language that can and should be used more often in other paradigms. It saddens me to see that Common Lisp does not even make the [2]Top 25 most popular languages on Github.

A year and a half ago, a dear [3]friend of mine sent me [4]this link. It changed the course of my life, although I shouldn’t extrapolate too far into the future. That was the first time I ever heard of Lisp and it hasn’t stopped enthralling me. After reading a couple of Paul Graham’s [5]great [6]essays, my [7]curiosity forced me to spend the last year full-time learning and hacking away in Common Lisp. Aren’t those claims kind of bold? I had to find out. I am still new to Common Lisp but I love it enough to blog about it.

At university I was spoon-fed Java. Nobody likes to be spoon-fed. My degree (Computer Science & Economics) was about applying IT to business problems and I guess ubiquity was their guiding star. It is not unwise to prepare students who are not independent thinkers for a job as a software engineer in a large corporation, but since the program was designed for students with an interest in applying IT to solve business problems, teaching Java killed it... quite literally. Of course basic programming skills are essential to the curriculum, but why Java? It’s stressful!

About half of my classmates dropped out after one year, because they said they didn’t like the programming part. That number rose in the subsequent years, and they cancelled the program completely when I graduated. A sad ending for what I thought was a great program (except for the Java part).

Lispers like to pick on Java. It makes sense, because Common Lisp has everything that Java lacks. Common Lisp makes programming fun again. I can’t make direct comparisons with other languages, since I don’t know them well enough. But as far as I know, Lisp’s strengths are unique. You should look into it if you tend to spend 70% of your time in the test-debug-test cycle, 20% of your time writing unnecessary code and remain only 10% productive.

People say Common Lisp is difficult to learn, but I had more trouble learning Java. My first few spoons of Java were to memorize “public static void main(String args[])”, before I could start playing with some code. Oh and I forgot, you have to declare your class first. What is a class, and why so many lines, just to produce a simple “Hello World”? Programming sucks!

Compare

class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

with Common Lisp’s equivalent:

"Hello World"

That’s right. Just type that in the REPL and Common Lisp will return “Hello World”. As simple as it should be. REPL is the Read-Eval-Print loop that Java is missing. The REPL allows for interactive programming in a way that will keep you “wired in” as you hack and debug. You are in a constant conversation with your software. Compare talking on the phone versus texting when you’re trying to tell your friend about Common Lisp.

Besides the REPL, another impressive feat of Common Lisp is meta-programming. You can write software to write software. Say what? Yeah, that was my initial reaction too, but Common Lisp’s macro system allows you to write functions that return code snippets. It completely redefines the word “macro”. Or more accurately, re-redefines, since Lisp is much older than MS Office, which is what most people associate macros with, sadly.

Claims like these have been made all over the Internet. But for a skeptic like myself — who spent 4 months intensively practicing Tai-Chi in Beijing, just to confirm the existence of “[8]qi” — I just had to try it for myself. “I cannot tell you what qi feels like, just like I cannot tell you what something tastes like. You have to try it for yourself,” my master told me.

When I was writing my thesis on a topic related to the [9]Vehicle Routing Problem, I was forced to use Java. I was never given any choice — or the illusion of it. I spent 80% of my time on anything but developing my ideas into prototype software. I recently re-wrote the library in Common Lisp and released it [10]open-source. I’ve been working on it for just over a month, and I did so with a big smile. I couldn’t be bothered to count the number of lines, but to give you an idea, my Java source code was 500.4 KB versus 78.8 KB in Common Lisp. The Java source code is not disclosed — it’s too embarrassing.

For people in the Operations Research industry — broadly trained mathematicians with a knack for applying their skills in practice, who lack the training, patience and passion for the nitty-gritty of Computer Science — [11]Common Lisp is the answer. Prototyping to test an idea or algorithm — to get quick results for your publication, if you will — would not and should not have to incur the overhead of “researching”. Common Lisp allows you to focus on what OR people do best: designing that algorithm.

Just like I thought the all-you-can-eat sushi in Rotterdam was good; a complete paradigm shift when I got to Hong Kong.

Discussion on [12]Hacker News and [13]Reddit

[14]Continue to Part II – code examples

For those interested to learn a little bit more, this is how I learned Lisp:

References

Visible links
1. https://kuomarc.files.wordpress.com/2012/01/lisp-glossy.jpg
2. https://github.com/languages/Common%20Lisp
3. http://about.me/joshchia
4. http://www.paulgraham.com/avg.html
5. http://lib.store.yahoo.net/lib/paulgraham/acl1.txt
6. http://www.paulgraham.com/articles.html
7. http://wp.me/p2arfK-2l
8. http://en.wikipedia.org/wiki/Qi
9. http://en.wikipedia.org/wiki/Vehicle_routing_problem
10. https://github.com/mck-/Open-VRP
11. https://kuomarc.wordpress.com/2012/03/05/the-uncommon-lisp-approach-to-operations-research/
12. http://news.ycombinator.com/item?id=3525927
13. http://www.reddit.com/r/lisp/comments/p1o46/why_i_love_common_lisp_and_hate_java/
14. https://kuomarc.wordpress.com/2012/02/02/why-i-love-common-lisp-and-hate-java-part-ii-code-examples/

HackerNewsBot debug: Calculated post rank: 80 - Loop: 80 - Rank min: 80 - Author rank: 64
 
 
 
 
Yes, #microsoft at a meager 10 percent. This is why these criminals are buying #github and putting in charge the man who kills #java projects like #robovm
 
Image/photo
Hello everyone, I'm new here!

Hey everyone, I am #newhere and #french.

I am interested in #humor, #horrorstories, #art, #photos, #java, #translation, #programming, #lingustics, #tech, #tor, #xmpp.
 
Our blogger Lars Röwekamp reports from the new Oracle Code One, the successor to JavaOne. His verdict is positive. #Java
 
Micronaut is a new framework well suited to building cloud-native JVM microservices. #Java #Micronaut #Microservices
 
Oops. I said #javavm but meant #robovm

Either way, #microsoft killed a key #java and #android project. Using its #mono -pushing proxy #xamarin (run by Nat Friedman and Miguel de Icaza)
 
More exciting than the changes in the current MicroProfile release is the preview of version 2.2 and beyond. #Java #MicroProfile #Microservices
 
Enterprise Java caretakers float new rules of engagement for future feature updates https://www.theregister.co.uk/2018/10/18/enterprise_java_community_process/ #java #programming
 
First Contact with Switch Expressions in #Java 12: https://youtu.be/1znHEf3oSNI — with a beer at the edge of a cliff into the sunset
 
#redhat on "The history and future of OpenJDK" https://www.redhat.com/en/blog/history-and-future-openjdk #OpenJDK very important to #java devs like me...
 
The Hamburg fashion house is launching a coding challenge aimed explicitly at girls and women in an IT-related degree or training program. #Diversity #Java #Programmierwettbewerb #Python
 
#Hedera needs to #deletegithub ASAP
#github manager has a history of killing #java projects like Hedera's. See how he killed #robovm
Then #microsoft RE-hired him
Hedera Hashgraph releases open source SDK 
 
#techrights documents the nasty ways in which #microsoft killed #java projects through its 'proxies' and Miguel de Icaza, Nat Friedman (soon #github chief) http://techrights.org/wiki/index.php/Xamarin company of criminals...
 
#Hedera needs to #deletegithub ASAP
It has a #java #sdk and the new boss of #github is the man whose firm (as CEO) #xamarin attacked and shut down Java projects like #roboVM
 
Hello everyone, I'm Jaime. I'm interested in #golang, #java, #php, #symfony, #tec, #tech and #tecnologia.
Thanks for sharing this network with me. I hope to meet lots of people who like the same things I do (or similar ones).
 
Java: Microsoft opens up parts of the Minecraft code #Minecraft #Java #Programmiersprache #Microsoft #Applikationen #OpenSource #Softwareentwicklung
 