If there exists an algorithm running in time $O(f(n))$ for some problem A, and somebody comes up with an algorithm running in time $O(f(n)/g(n))$, where $g(n) = o(f(n))$, is it considered an improvement over the previous algorithm?
In the context of theoretical computer science, does it make sense to present such an algorithm?
Source
Answers:
No, an algorithm running in time $O(f(n)/g(n))$, where $g(n) = o(f(n))$, is not necessarily considered an improvement. For example, suppose that $f(n) = n$ and $g(n) = 1/n$. Then $O(f(n)/g(n)) = O(n^2)$, which is a worse running time than $O(f(n)) = O(n)$.
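Spelled out, that example gives
$$O\!\left(\frac{f(n)}{g(n)}\right) = O\!\left(\frac{n}{1/n}\right) = O(n^2).$$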
To improve on an algorithm running in time $f(n)$, you need to come up with an algorithm running in time $o(f(n))$, that is, in time $g(n)$ for some function $g(n) = o(f(n))$.
If all you know is that an algorithm runs in time $O(f(n))$, it is not clear whether an algorithm running in time $O(g(n))$ is an improvement, whatever $f(n)$ and $g(n)$ are. This is because big O is only an upper bound on the running time. Instead, it is common to consider the worst-case time complexity and to estimate it as a big $\Theta$ rather than a big $O$.
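For reference, the two notions this answer relies on (standard definitions, not from the original post; $T(n)$ here denotes a worst-case running time):
$$g(n) = o(f(n)) \;\iff\; \lim_{n\to\infty} \frac{g(n)}{f(n)} = 0, \qquad T(n) = \Theta(f(n)) \;\iff\; c_1 f(n) \le T(n) \le c_2 f(n) \text{ for some constants } c_1, c_2 > 0 \text{ and all sufficiently large } n.$$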
Source
Remember that O(...) notation is meant for analyzing how the task grows for different sizes of input, and it specifically leaves out multiplicative factors, lower-order terms, and constants.
Suppose you have an $O(n^2)$ algorithm whose actual running time is $1n^2 + 2n + 1$ (assuming you can actually count the instructions and know the exact timings and so on, which is admittedly a huge assumption in modern systems). Then suppose you come up with a new algorithm that is $O(n)$, but whose actual running time is $1000n + 5000$. Also suppose you know that the software using this algorithm will never see a problem size of $n > 10$.
So, which would you choose: the $O(n)$ algorithm that is going to take 15,000 units of time, or the $O(n^2)$ one that will only take 121 units? Now, if your software evolves to handle problem sizes of $n > 100000$, which one would you pick? What would you do if your problem size varies greatly?
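As a minimal sketch of that comparison (the two formulas are taken from the paragraph above; everything else, including the function names, is illustrative only):

```python
# Evaluate the two hypothetical cost models at the problem sizes discussed above.
def cost_quadratic(n):
    return 1 * n**2 + 2 * n + 1   # actual running time of the O(n^2) algorithm

def cost_linear(n):
    return 1000 * n + 5000        # actual running time of the O(n) algorithm

for n in (10, 100_000):
    print(n, cost_quadratic(n), cost_linear(n))
# n = 10:      121 units vs 15000 units        -> the O(n^2) algorithm wins
# n = 100000:  ~1.0e10 units vs ~1.0e8 units   -> the O(n) algorithm wins by far
```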
Source
And sometimes, even theoretical computer scientists use “faster” the same way normal people do. For example, most implementations of String classes have Short String Optimization (also called Small String Optimization), even though it only speeds things up for short strings and is pure overhead for longer ones. As the input size gets larger and larger, the running time of a String operation with SSO is going to be higher by a small constant term, so by the definition I gave in the first paragraph, removing SSO from a String class makes it “faster.” In practice, though, most strings are small, so SSO makes most programs that use them faster, and most computer-science professors know better than to go around demanding that people only talk about orders of asymptotic time complexity.
Source
There is not one unified definition of what a "faster algorithm" is. There is no governing body which decides whether one algorithm is faster than another.
To point out why this is, I'd like to offer up two different scenarios which demonstrate this murky concept.
The first example is an algorithm which searches a linked list of unordered data. If I can do the same operation with an array, there is no change in the big Oh measure of performance. Both searches are O(n). If I just look at the big Oh values, I might say that I made no improvement at all. However, it is known that array lookups are faster than walking a linked list in the majority of cases, so one may decide that this made the algorithm "faster," even though the big Oh did not change.
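A sketch of the two searches being compared (illustrative only, not from the original answer): both functions below do the same linear scan, so both are O(n), and any difference in speed comes from constant factors such as memory layout and pointer chasing.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    next: Optional["Node"] = None

def search_array(items: list, target) -> bool:
    for x in items:             # contiguous memory, cache-friendly
        if x == target:
            return True
    return False

def search_linked_list(head: Optional[Node], target) -> bool:
    while head is not None:     # one pointer dereference per element
        if head.value == target:
            return True
        head = head.next
    return False
```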
If I may use the traditional example of programming a robot to make a PBJ sandwich, I can show what I mean another way. Consider just the point where one is opening the jar of peanut butter.
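The step-by-step lists from the original answer are not reproduced here; as a purely hypothetical sketch (the motion names below are invented for illustration), the first procedure might be:

```python
def open_jar_direct():
    # three motions, no matter what
    return ["pick up jar", "grip lid", "twist lid off"]
```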
Versus
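A version that reaches the same result with redundant extra handling (again, a hypothetical sketch):

```python
def open_jar_wasteful():
    motions = []
    for _ in range(10):                        # pointless repeated handling
        motions += ["pick up jar", "put jar down"]
    motions += ["pick up jar", "grip lid", "twist lid off"]
    return motions                             # 23 motions instead of 3
```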
Even in the most academic theoretical setting I can think of, you'll find that people accept that the first algorithm is faster than the second, even though the big Oh notation results are the same.
By contrast, we can consider an algorithm to break RSA encryption. At the moment, it is perceived that this process is probably O(2^n), where n is the number of bits. Consider a new algorithm which runs n^100 times faster. This means my new process runs in O(2^n/n^100). However, in the world of cryptography, a polynomial speedup to an exponential algorithm is traditionally not thought of as a theoretical speedup at all. When doing security proofs, it is assumed that an attacker may discover one of these speedups, and that it will have no effect.
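To see why that is not regarded as a meaningful speedup: dividing by a polynomial only subtracts a logarithmic term from the exponent, so the running time is still exponential in n:
$$\frac{2^{n}}{n^{100}} = 2^{\,n - 100\log_2 n} = 2^{(1 - o(1))\,n}.$$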
So in one circumstance, we can change an O(n) algorithm to another O(n) algorithm and call it faster. In a different circumstance, we can change O(2^n) to O(2^n/n^100) and claim there was no meaningful speedup at all. This is why I say there is no one unified definition of a "faster algorithm." It is always contextually dependent.
Source
I can't comment yet, but I feel like the current answers, while correct and informative, do not address part of this question. First, let us write an expression equivalent to $A(n) \in O(f(n))$:
$$\limsup_{n\to\infty} \frac{A(n)}{f(n)} = c_f < \infty.$$
Now, let us assume we are talking about an arbitrarily increasing function $g(n)$, where $\limsup_{n\to\infty} g(n) = \infty$, and let us create the function $h(n) = f(n)/g(n)$.
We are given that the run-time of the "improved" algorithm $A'(n)$ is in $O(h(n))$. Suppose that the run-time of the original algorithm $A(n)$ is also in $O(h(n))$. This can be written as follows:
$$\limsup_{n\to\infty} \frac{A(n)}{h(n)} = c_h < \infty.$$
Using the rules of limits, we can also write:
$$c_h = \limsup_{n\to\infty} \frac{A(n)}{h(n)} = \limsup_{n\to\infty} \frac{A(n)\,g(n)}{f(n)}.$$
Since $c_h < \infty$ while $g(n)$ grows without bound, this can only be true if $c_f = 0$.
The contrapositive statement is: if $c_f \neq 0$, then $A(n) \notin O(h(n))$.
In words, $A'(n)$ is an "improvement" on $A(n)$ under the additional conditions that $A(n) \in \Theta(f(n))$ and $g(n)$ is arbitrarily increasing.
Additionally, this should show why the statement that $A(n) \in O(f(n))$ is not strong enough to draw a conclusion about whether $A'(n)$ is an "improvement." In short, $A(n)$ could already be in $O(h(n))$.
Source