https://arxiv.org/pdf/2506.11035

(this sounds like insane bullshit to me tbh, because given an n-dimensional vector for an object you can already measure likeness per dimension if you count each dim as a feature.) this is about asymmetric similarity between objects and their vector representations. they start by defining what Tversky's similarity is: rather than the objects themselves being similar, individual features of them can be similar or dissimilar, and the objects are actually sets of these features.
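for reference, the classic contrast model from Tversky (1977) that the paper builds on, where A and B are the feature sets of objects a and b, f is a salience measure over a feature set, and θ, α, β are weights:

```latex
S(a, b) = \theta\, f(A \cap B) \;-\; \alpha\, f(A \setminus B) \;-\; \beta\, f(B \setminus A)
```

note the asymmetry: when α ≠ β, S(a, b) ≠ S(b, a), which is what lets this model capture "the variant is more similar to the prototype than vice versa".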

representation of objects as vectors and sets: each object is a vector, and assuming a universe of learned feature vectors, the object's feature set is the subset of that universe it "contains". so the object is simultaneously a vector and a set. the features here are separately learned vectors, not dimensions of the object vector.
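a minimal sketch of my reading of the membership idea (variable names are mine): an object a "has" a feature f_k when their dot product is positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# universe: a bank of feature vectors (rows); learned in the paper,
# random stand-ins here
feature_bank = rng.normal(size=(6, 4))   # 6 features, dim 4
a = rng.normal(size=4)                   # an object vector

# object a "has" feature k when the dot product is positive
memberships = feature_bank @ a > 0
feature_set_a = np.nonzero(memberships)[0]
print(feature_set_a)  # indices of features in a's set
```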

salience: (here salient roughly means which object is more complicated or feature rich) they say that given a relation between two objects, the one with fewer features is judged more similar to the one with larger salience than the other way around. i think this is because the more salient object has a higher probability of containing the other's features in its feature set, and more besides. they also give an equation for this salience.

feature set intersections

this is important to understand. to find how many features two objects have in common, what they do is: for each feature in the bank, they check whether the object vector "has" the feature vector, i.e. whether their dot product is greater than 0. then, over the features both objects have, they compute an aggregate of the two dot products (e.g. min or product).

and then they have a similar treatment for feature difference (A minus B). they have two variants of the difference value: ignorematch: sum of magnitudes of features in A but not in B. substractmatch: for features present in both but stronger in A, sum of the excess.
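a rough numpy sketch of my reading of these measures (the function names and the min aggregation for the intersection are my choices, not necessarily the paper's exact formulation):

```python
import numpy as np

def feature_scores(x, bank):
    """Dot product of object x with every feature vector in the bank."""
    return bank @ x

def intersection(a, b, bank):
    """Aggregate over features both objects have (both dot products > 0);
    here the aggregation is the min of the two scores per feature."""
    sa, sb = feature_scores(a, bank), feature_scores(b, bank)
    both = (sa > 0) & (sb > 0)
    return np.minimum(sa, sb)[both].sum()

def diff_ignorematch(a, b, bank):
    """A - B, 'ignorematch': sum a's scores for features a has but b lacks."""
    sa, sb = feature_scores(a, bank), feature_scores(b, bank)
    only_a = (sa > 0) & (sb <= 0)
    return sa[only_a].sum()

def diff_substractmatch(a, b, bank):
    """A - B, 'substractmatch': for features both have but a has more
    strongly, sum the excess (a's score minus b's)."""
    sa, sb = feature_scores(a, bank), feature_scores(b, bank)
    a_stronger = (sa > 0) & (sb > 0) & (sa > sb)
    return (sa - sb)[a_stronger].sum()

rng = np.random.default_rng(1)
bank = rng.normal(size=(8, 5))
a, b = rng.normal(size=5), rng.normal(size=5)
print(intersection(a, b, bank), diff_ignorematch(a, b, bank))
```

note the intersection is symmetric in a and b, while both difference measures are directional, which is where the asymmetry of the final similarity comes from.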

tversky neural network modules

finally something useful. they build two network modules: a similarity layer (replacing cosine similarity or dot product) and a projection layer (replacing the standard linear projection layer).

tversky similarity network

sooo basically they have an encoding model making our vectors, say a and b, and then we compute the tversky similarity between them. the equation is S(a, b) = θ·intersection(A, B) − α·difference(A, B) − β·difference(B, A), where θ, α, β are learned scalars, intersection and difference are the formulas from before, and the feature bank is a learned (what do you call it) parameter matrix defining the n features we can use. i think this is enough information to work off of to make the model.
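putting it together, a minimal sketch of the similarity layer under my reading (numpy stand-in for illustration: in practice the bank and scalars would be learned parameters, and the hard `> 0` gating would need a smooth relaxation to be trainable):

```python
import numpy as np

class TverskySimilarityLayer:
    """Sketch: Tversky similarity with a feature bank and scalars
    theta, alpha, beta (all learned in the paper; fixed here)."""

    def __init__(self, n_features, dim, seed=2):
        rng = np.random.default_rng(seed)
        self.bank = rng.normal(size=(n_features, dim))
        self.theta, self.alpha, self.beta = 1.0, 0.5, 0.5

    def _scores(self, x):
        return self.bank @ x

    def _diff(self, sa, sb):
        # 'ignorematch' variant: score mass of features a has but b lacks
        mask = (sa > 0) & (sb <= 0)
        return sa[mask].sum()

    def __call__(self, a, b):
        sa, sb = self._scores(a), self._scores(b)
        both = (sa > 0) & (sb > 0)
        inter = np.minimum(sa, sb)[both].sum()
        return (self.theta * inter
                - self.alpha * self._diff(sa, sb)
                - self.beta * self._diff(sb, sa))

rng = np.random.default_rng(3)
sim = TverskySimilarityLayer(n_features=8, dim=5)
a, b = rng.normal(size=5), rng.normal(size=5)
print(sim(a, b))
```

with alpha == beta this comes out symmetric; setting alpha != beta recovers the prototype/variant asymmetry the paper cares about.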

e2e encoder