Gravity Sort: Is this possible programmatically?

0 votes
asked Mar 21, 2010 by bragboy

I've recently been thinking about applying object-oriented design to sorting algorithms. However, I haven't been able to find a proper way to get anywhere near a sorting algorithm that does the sorting in O(n) time.

Ok, here is what I've been thinking about for a week. I have a set of input data, and I will assign a mass to each item (assume the input data to be of type Mass and type Sphere; if we assume all items are perfectly spherical objects with sizes proportional to their mass, the heaviest one touches the ground first). I place all my input data in space, all at the same distance from the earth, and let them free-fall. The heaviest one hits the ground first, and the order in which they hit gives me the sorted data. This is funny in some way, but underneath it I feel this should be possible using the OO concepts I have learnt to date.

Is it really possible to make a sorting technique that uses a gravitational-pull-like scenario, or am I stupid/crazy?

Edit: It turns out all objects hit the ground at the same time, hence I introduced the spherical-object concept.

19 Answers

0 votes
answered Jan 10, 2010 by edgerunner

From Debilski's answer:

I'll adapt your idea a little. We do have our objects, but they don't differ in weight; they differ in speed. So at the beginning all objects are aligned at the starting line, and at the starting shot they all move with their respective velocities towards the finish.

Clear enough: the first object to reach the finish emits a signal saying it is there. You catch the signal and write down on the results sheet who it was.

I'd simplify it a step further and say these objects are random positive integers, and we want to sort them in ascending order as they approach zero; so each has a distance from zero, d, which is initially equal to the integer itself.

Assuming the simulation takes place in discrete time steps, i.e. frames, every object's new distance in each frame would be d = d - v, and an object gets added to the result list when its d ≤ 0. That is one subtraction and one conditional: two discrete steps per object, so the calculations for one frame seem to be O(n): linear.

The catch is, it is linear for one frame only! It is multiplied by the number of frames f required to finish. The simulation itself is therefore O(nf): superlinear, with f determined by the largest value and the frame duration.

However, if we treat the duration of a frame, t, as a parameter, we gain the ability to affect the number of frames f in an inversely proportional manner. We can increase t to reduce f, but this comes at the cost of accuracy: the more we increase t, the higher the probability that two objects finish in the same frame and are therefore listed as equivalent even when they are not. So we get an algorithm with adjustable accuracy (as in most finite-element simulation contexts).
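
A minimal sketch of this fixed-tolerance simulation in Python (my own illustration; v is the per-frame decrement, i.e. the frame duration):

def frame_sort(values, v=1):
    remaining = [(x, x) for x in values]      # (value, distance from zero)
    result = []
    while remaining:
        still_falling = []
        for value, d in remaining:
            d -= v                            # one frame: d = d - v
            if d <= 0:
                result.append(value)          # one subtraction, one conditional
            else:
                still_falling.append((value, d))
        remaining = still_falling
    return result

# frame_sort([5, 2, 9, 1]) -> [1, 2, 5, 9]; with v > 1, close values may
# land in the same frame and come out in input order instead.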

We can further refine this by turning it into an adaptive, recursive algorithm. In human code it would be:

function: FESort: arguments: OriginalList, Tolerance
  define an empty local list: ResultList

  while OriginalList has elements
    define an empty local list: FinishedList
    iterate through OriginalList
      decrement the distance of each object by Tolerance
      if the distance is less than or equal to zero, move the object from OriginalList to FinishedList

    check the number of elements in FinishedList
      when zero
        set Tolerance to double Tolerance
      when one
        append the only element in FinishedList to ResultList
      when more
        undo the last decrement for the objects in FinishedList (otherwise their distances stay at or below zero and the recursion can never separate them)
        append the result of FESort with FinishedList and half Tolerance to ResultList

  return ResultList

I'm wondering if there's any real implementation along these lines that works for someone.

Interesting subject indeed :)

PS. The pseudocode above is my idea of pseudocode, please feel free to rewrite it in a more legible or conformant way if there is one.
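
For what it's worth, here is one possible translation into Python (a sketch under my own assumptions: each item carries its original value alongside its distance, the last decrement is undone before recursing so that simultaneous finishers can finish again separately, and values are assumed distinct, since exact duplicates would recurse forever without an equality guard):

def fe_sort(items, tolerance):
    # items: list of (value, distance) pairs, e.g. [(x, x) for x in numbers];
    # returns the values in ascending order
    result = []
    remaining = list(items)
    while remaining:
        finished, still_falling = [], []
        for value, d in remaining:
            d -= tolerance
            if d <= 0:
                # undo the decrement so a finer-grained recursive pass
                # can separate simultaneous finishers
                finished.append((value, d + tolerance))
            else:
                still_falling.append((value, d))
        remaining = still_falling
        if len(finished) == 1:
            result.append(finished[0][0])
        elif len(finished) > 1:
            result.extend(fe_sort(finished, tolerance / 2))
        else:
            tolerance *= 2        # nothing finished this frame: double the step
    return result

# fe_sort([(x, x) for x in [5, 3, 8, 1]], 1.0) -> [1, 3, 5, 8]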

0 votes
answered Jan 21, 2010 by seth

Once you've calculated all the times they take to hit the ground, you still have to sort those values. You're not really gaining anything; you're just sorting different numbers after performing some extra, unnecessary calculation.

EDIT: Whoops. Forgot physics 101. Of course they'll all hit at the same time. :)

Any sort of modeling like this just transforms one sorting problem into another one. You won't gain anything.

0 votes
answered Jan 21, 2010 by osgx

Gravitational calculation is free of charge only in the real world. In a program, you need to model it, and that adds at least another factor of n to the complexity.

0 votes
answered Jan 21, 2010 by james-barrass

The idea might seem simple, but the difference between the real world and the modelled one in this case is that, in the real world, everything happens in parallel. To model the gravitational sort as you describe it, you would have to start by putting each object on a separate thread, with a thread-safe way of adding them to a list in the order they finish. Realistically, in terms of sorting performance, you would probably just quicksort the times, or, since they all start at the same distance, the rates of falling. But if your formula makes those proportional to the mass, you'd skip all of that and just sort the masses.
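
As a toy illustration of that parallel picture (my own sketch, essentially a "sleep sort", not a serious recommendation): give every value its own thread, let its "fall time" be proportional to the value, and append to a shared list under a lock as each one lands.

import threading
import time

def parallel_fall_sort(values):
    result = []
    lock = threading.Lock()

    def fall(v):
        time.sleep(v * 0.01)          # fall time proportional to the value
        with lock:                    # thread-safe append, in landing order
            result.append(v)

    threads = [threading.Thread(target=fall, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

# parallel_fall_sort([3, 1, 2]) usually returns [1, 2, 3]; scheduler jitter
# can swap close values, which is exactly the accuracy problem again.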

0 votes
answered Jan 21, 2010 by jader-dias

In a fictitious "gravitational computer", this kind of sorting would take O(1) to solve. But with real computers as we know them, this lateral thinking won't help.

0 votes
answered Jan 21, 2010 by steve314

General-purpose sorting is provably at best O(n log n). To improve on this, you have to know something about the data other than just how to compare values.

If the values are all numbers, radix sorting gives O(n), assuming you have upper and lower bounds for the numbers. This approach can be extended to handle any numbers, and ultimately all data in a computer is represented as numbers. For example, you can radix-sort strings by sorting on one character per pass, as if it were a digit.

Unfortunately, handling variable sizes of data means making a variable number of passes through the radix sort. You end up with O(n log m), where m is the largest value (since k bits give you values of up to (2^k)-1 unsigned, half that signed). For example, if you are sorting the n integers from 0 to n-1, then m = n and you've actually got O(n log n) again.
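
For concreteness, a minimal LSD radix sort for non-negative integers (my own example, base 256, one stable distribution pass per byte of the largest value, hence the O(n log m) passes mentioned above):

def radix_sort(nums):
    if not nums:
        return nums
    passes = (max(nums).bit_length() + 7) // 8   # O(log m) passes
    for p in range(passes):
        shift = 8 * p
        buckets = [[] for _ in range(256)]
        for x in nums:                           # stable pass: O(n)
            buckets[(x >> shift) & 0xFF].append(x)
        nums = [x for bucket in buckets for x in bucket]
    return nums

# radix_sort([170, 45, 75, 90, 802, 24, 2, 66])
#   -> [2, 24, 45, 66, 75, 90, 170, 802]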

Translating one problem to another can be a very good approach, but sometimes it's just adding another layer of complexity and overhead.

0 votes
answered Jan 21, 2010 by mpen

Ignoring all the flaws that everyone else has mentioned, this essentially boils down to an O(n^2) algorithm, not O(n). You'd have to iterate over all the "spheres", find the "heaviest" or "biggest" one, push it onto a list, then rinse and repeat. Finding the one that hits the ground first means iterating over the whole list: O(n) for the first, O(n-1) for the second, and so on. Worse than that, each step performs a bunch of mathematical operations just to compute some useless "time" value, when you could have sorted on the value you were interested in in the first place.
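
Put differently, the naive simulation degenerates into a selection sort. A sketch (my own illustration, using the mass itself as the key):

def gravity_selection_sort(masses):
    masses = list(masses)
    landed = []
    while masses:                    # n iterations...
        heaviest = max(masses)       # ...each scanning what is left: O(n^2) total
        masses.remove(heaviest)
        landed.append(heaviest)      # the heaviest "hits the ground" first
    return landed

# gravity_selection_sort([3, 9, 1]) -> [9, 3, 1]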

0 votes
answered Jan 22, 2010 by amrinder-arora

I love the idea! It is clever. While what others are saying is generally correct, namely that O(n log n) is a provable lower bound on the sorting problem in general, we need to keep in mind that this lower bound holds only for comparison-based models. What you are proposing here is a different model altogether, and it does deserve further thought.

Also, as James and Matrix have pointed out, the heavier object doesn't travel faster than the lighter one; you obviously need to adapt the model to one in which the heavier object (number) really does travel faster or further (or slower and less far), so that you can somehow distinguish the numbers (a centrifuge comes to mind as well).

Requires more thought, but your idea is sharp!

(Edit) Now looking at Enrique's Phys2D idea, I think it makes a whole lot more sense.

One thing I would suggest is to definitely ignore the efficiency aspect for now. (I know, I know, that was the entire goal.) It is a new model, and we first need to bridge the gap between the idea and its implementation. Only then can we comprehend what time complexity even means for this model.

0 votes
answered Jan 22, 2010 by enrique

Some weeks ago I was thinking about the same thing.

I thought of using the Phys2D library to implement it. It may not be practical, but it would be fun. You could also assign negative weights to objects to represent negative numbers: with the Phys2D library you can define gravity so that objects with negative weight rise to the roof and objects with positive weight fall to the floor. Arrange all the objects in the middle, at the same distance from floor and roof.

Suppose you have a result array r[] whose length equals the number of objects. Every time an object touches the roof, you add it at the beginning of the array (first at r[0], then r[1], incrementing a counter each time); every time an object reaches the floor, you add it at the end (first at r[r.length-1], then r[r.length-2], and so on). At the end, the objects that didn't move (the ones still floating in the middle) go in the middle of the array; these are the zeros.

This is not efficient, but it can help you implement your idea.
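
Without relying on the actual Phys2D API (which I won't reproduce from memory), a rough discrete-time sketch of the same scheme might look like this: speed proportional to the signed weight, negatives drift to the roof, positives fall to the floor, zeros stay in the middle. The ceiling parameter (distance from the middle to roof/floor) is a hypothetical knob of this sketch.

def two_way_gravity_sort(values, ceiling=1000):
    r = [None] * len(values)
    top, bottom = 0, len(values) - 1
    # zeros never move, so keep them aside for the middle of the array
    remaining = [(v, 0.0) for v in values if v != 0]
    while remaining:
        still_moving = []
        for v, pos in remaining:
            pos += v                        # speed proportional to the weight
            if pos <= -ceiling:             # touched the roof: most negative first
                r[top] = v
                top += 1
            elif pos >= ceiling:            # reached the floor: heaviest first
                r[bottom] = v
                bottom -= 1
            else:
                still_moving.append((v, pos))
        remaining = still_moving
    for i in range(top, bottom + 1):        # the zeros float in the middle
        r[i] = 0
    return r

# two_way_gravity_sort([3, -1, 0, -7, 2]) -> [-7, -1, 0, 2, 3]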


...