Amazing talk as always. With leaps forward like these, Java will be more relevant than ever in the coming years. Thank you Roman, Thomas, and everyone involved with Lilliput.
@renbangbprd7236 (a month ago)
Bring on the low memory usage! I hope Lilliput + Leyden + Valhalla can reduce Java memory usage by ~50%.
@janwiemer8053 (a month ago)
Hi! Thanks for the insights on Project Lilliput! Great talk! One question regarding the two-space GC: if the copied objects would exceed the size of the space, might it be an idea not to copy them at all? Copying them would not save memory anyway.
@prdoyle (a month ago)
For far classes, did you consider putting the klass slot closer to the start of the object, so that it is in the same page (or even cache line) as the header? The klass pointer is likely one of the most important fields for performance, right?
@Barteks2x (a month ago)
In case you have superclasses that already exist, you would have to change the object field layout after instances of it have been made, which is definitely not going to work. Otherwise they probably do put it right at the beginning of the class currently being loaded, but after the superclass fields, because superclass fields have to stay at the same offsets in the subclass. (This is what allows fast polymorphic field access: the fields are in the same place regardless of whether you have the parent class or any subclass.)
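This layout rule can be observed directly. Here is a hedged sketch using the OpenJDK JOL tool (the org.openjdk.jol:jol-core dependency is assumed; class names are invented for the demo): the superclass field keeps the same offset in the subclass, which is what makes polymorphic field access cheap.

```java
import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {
    static class Parent { int p; }
    static class Child extends Parent { int c; } // 'p' keeps its Parent offset here

    public static void main(String[] args) {
        // Prints header size and field offsets; exact numbers depend on the JDK.
        System.out.println(ClassLayout.parseClass(Parent.class).toPrintable());
        System.out.println(ClassLayout.parseClass(Child.class).toPrintable());
    }
}
```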
@RasaelBervini (a month ago)
Excuse me, but how do you guarantee that, after GC, when an address is reused for a new object, it won't cause identity hash code conflicts? An application relying on identity hash codes (RMI, for example) may run into nasty bugs if it runs for a long time. Very interesting talk, by the way; thank you!
@ExEBoss (a month ago)
This gets explained starting at 33:40.
@RasaelBervini (a month ago)
@@ExEBoss Thank you, yes, I was referring to that part. Since the identity hash code is computed from the object's memory address and then made 'final' when the object is moved by the GC, what happens when another object is instantiated at the same (original) address and its identity hash code is computed? Then we have two different instances with the same identity hash code, right?

var a = new Object(); // no hash code computed yet; object at address 0xA
System.identityHashCode(a); // hash code computed from the object's address
System.gc(); // assume 'a' is moved and the hash code now travels with the object data; the original address 0xA is free again
var b = new Object(); // it gets address 0xA and ends up with the same identityHashCode as 'a'
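The scenario can be made concrete. Below is a minimal, hedged sketch (the class name is invented for the demo): it cannot force the JVM to reuse a specific address, but it shows the two guarantees that actually hold: an object's identity hash code never changes once computed, even across GC moves, and uniqueness across different objects is not one of the guarantees.

```java
public class IdentityHashDemo {
    public static void main(String[] args) {
        Object a = new Object();
        int before = System.identityHashCode(a); // computed lazily on first request
        System.gc();                             // 'a' may be moved; the hash is preserved
        int after = System.identityHashCode(a);
        System.out.println(before == after);     // always true: the identity hash is stable
        // Note: nothing guarantees that a *different* object won't get the same value.
    }
}
```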
@vinterskugge907 (a month ago)
Hash codes are not guaranteed to be unique. If an application depends on the uniqueness of hash codes, whether from System.identityHashCode() or from any hashCode() implementation, that is a bug in the application.
@nisonatic (a month ago)
@@RasaelBervini From the general contract of .hashCode: _it is not required that if two objects are unequal according to the equals method, then calling the hashCode method on each of the two objects must produce distinct integer results._ And Object.hashCode specifically states: _As far as is reasonably practical, the hashCode method defined by class Object returns distinct integers for distinct objects._ So the answer is that you *can* get the same value for both objects; it was never guaranteed that hash codes would be distinct. FWIW, it has always been possible to create billions of objects and simply overwhelm the size of int. A HashMap of objects that don't override hashCode/equals will still work: 'a' and 'b' from your example will be put in the same bucket, but when you call map.get(b), it will fail the == test against a and pass the == test against b, returning the value you expected.
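The bucket behavior is easy to demonstrate without relying on address reuse. A hedged sketch (the key class is invented for the demo) where every key has the same hash code, yet HashMap still returns the right values because it falls back to an equality check within the bucket:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    static class ConstantHashKey {
        @Override
        public int hashCode() { return 42; } // every instance collides on purpose
        // equals() is inherited from Object, i.e. reference equality, like ==
    }

    public static void main(String[] args) {
        ConstantHashKey a = new ConstantHashKey();
        ConstantHashKey b = new ConstantHashKey();
        Map<ConstantHashKey, String> map = new HashMap<>();
        map.put(a, "value-a");
        map.put(b, "value-b");          // lands in the same bucket as 'a'
        System.out.println(map.get(a)); // prints value-a
        System.out.println(map.get(b)); // prints value-b
    }
}
```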
@JosuaKrause (a month ago)
@@RasaelBervini System.identityHashCode is not guaranteed to be unique. The only guarantee is that if the hashes differ, the objects are different too. If you need a unique identifier, you could generate a UUID (version 4) and store it in the object.
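A sketch of that suggestion, with invented class and field names: if the application truly needs a unique per-object identifier, it can store one explicitly rather than leaning on identity hash codes. UUID.randomUUID() produces a version 4 UUID, which is collision-free for practical purposes:

```java
import java.util.UUID;

public class UniquelyIdentified {
    private final UUID id = UUID.randomUUID(); // explicit, effectively unique identity

    public UUID id() { return id; }

    public static void main(String[] args) {
        System.out.println(new UniquelyIdentified().id());
    }
}
```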
@denis_iii (a month ago)
Using a half-sized pointer as an object header seems a bit over-engineered. 64 bits looks like a good compromise for future extensions and optimizations. The speaker's presentation is superb.
@lobaorn (a month ago)
Not sure what you mean by half-sized pointer or over-engineering; if we can have a 32-bit header, why should we prefer a 64-bit one? For example, heaps with billions of small objects (telemetry, say) would benefit heavily, since the header makes up a higher proportion of each object's size. Foundational software like the JVM is "overengineered" most of the time precisely so that its users can benefit from it...
@denis_iii (a month ago)
@@lobaorn All access to the object header must be 64-bit aligned by any modern compiler, so there is no real difference between 32 and 64 bits once you have 2-4 real fields. Even if you collect telemetry, you will have an int64 timestamp plus an int64 value, and the header size won't matter because alignment rounds everything up to 64 bits anyway. Boxed values will also be gone from collections in the near future. There are no one-field objects without a hash code in modern Java.
@StefanReich (a month ago)
@@denis_iii With a 32-bit header instead of a 64-bit one, you can put 32 bits of additional user data into the object. Not sure what you are criticizing here...
@Barteks2x (a month ago)
A 4-byte header allows an object with 4 bytes of field(s) to fit in 8 bytes. That literally makes it possible to fit the whole object into a single register sometimes.
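The arithmetic can be checked with JOL as well. A hedged sketch (org.openjdk.jol:jol-core assumed; names invented): on a stock 64-bit JDK with compressed class pointers the header is 12 bytes, so a one-int object is padded out to 16 bytes; with the 4-byte header Lilliput is aiming for, 4 bytes of header plus 4 bytes of field could fit in 8. Actual numbers depend on the JDK version and flags such as -XX:+UseCompactObjectHeaders.

```java
import org.openjdk.jol.info.ClassLayout;

public class IntBoxSize {
    static class IntBox { int value; } // exactly 4 bytes of fields

    public static void main(String[] args) {
        // Prints the header, field offsets, padding, and total instance size.
        System.out.println(ClassLayout.parseInstance(new IntBox()).toPrintable());
    }
}
```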
@Rulestormer (a month ago)
@denis_iii You are missing the point of Project Lilliput. The goal is to reduce memory consumption, which is good in itself and will help performance as well, since you'll be able to fit more objects, and thus more information, into the same cache as before.