There are several theoretical limits to computing speed, such as:
The speed of light: As far as modern physics can determine, information cannot propagate faster than the speed of light, so every computational step incurs some signal-propagation delay.
The amount of time for each step: The Planck time is often taken as the smallest meaningful increment of time, which implies an upper limit on cycles per second.
Landauer's principle: There is a lowest theoretical amount of energy a computational operation can consume. Energy used turns into heat, and excessive heat damages circuits, so the less energy each operation consumes, the more operations can be performed.
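For a rough sense of scale, the three limits above can be put into numbers. This is a back-of-the-envelope sketch: the 5 GHz clock rate and the 300 K operating temperature are illustrative assumptions, not specs of any real chip.

```python
import math

# Physical constants (CODATA values, rounded).
C = 299_792_458            # speed of light in a vacuum, m/s
PLANCK_TIME = 5.39e-44     # Planck time, seconds
K_B = 1.380649e-23         # Boltzmann constant, J/K

# Illustrative assumptions, not real chip specs.
CLOCK_HZ = 5e9             # hypothetical 5 GHz processor
T_ROOM = 300.0             # assumed operating temperature, kelvin

# 1. Speed of light: how far a signal can travel in one clock cycle.
distance_per_cycle = C / CLOCK_HZ
print(f"Light covers about {distance_per_cycle * 100:.1f} cm per 5 GHz cycle")

# 2. Planck time: if a cycle can't be shorter than the Planck time,
#    this is the ceiling on cycles per second.
max_cycles_per_second = 1 / PLANCK_TIME
print(f"Upper bound on clock rate: about {max_cycles_per_second:.2e} Hz")

# 3. Landauer's principle: erasing one bit dissipates at least k*T*ln(2) joules.
min_energy_per_bit = K_B * T_ROOM * math.log(2)
print(f"Minimum energy to erase one bit at {T_ROOM:.0f} K: {min_energy_per_bit:.2e} J")
```

At 5 GHz light only covers about 6 cm per cycle, which is why signal distance matters even inside a single chip.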
You mean how much it could theoretically increase? How powerful processors can be depends on how small their transistors can be made, and with enough reduction in size quantum-mechanical phenomena start to become a problem. Modern processors are made with photolithography, which uses light to project circuit patterns onto a silicon wafer. With small enough transistors you can get errors from electrons randomly tunneling through barriers even though they shouldn't have enough energy to do so; this phenomenon is called quantum tunneling. Limitations caused by the wavelength of light being too large compared to the required line width have already been worked around with various clever tricks. What I'm saying is that the original Moore's law (the doubling of the number of transistors in a given area of a silicon wafer roughly every two years) ran into hard physical limits quite some time ago and has been slowing down.
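Moore's law as originally stated is easy to sketch numerically. The starting count of 1 billion transistors is a hypothetical figure, and the projection deliberately ignores the slowdown described above:

```python
def transistors_after(years: float, start_count: float,
                      doubling_years: float = 2.0) -> float:
    """Project transistor count assuming a doubling every `doubling_years`.

    Purely illustrative: real scaling has been slowing down.
    """
    return start_count * 2 ** (years / doubling_years)

# Hypothetical chip starting at 1 billion transistors.
for years in (2, 10, 20):
    print(f"after {years:2d} years: {transistors_after(years, 1e9):.2e} transistors")
```

Exponential growth like this is exactly why physical limits eventually bite: a few more decades of doubling would require features far smaller than individual atoms.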
The current methods of creating powerful processors are probably not the end of the line, however. There are many theoretical possibilities, including optical transistors and using graphene (carbon atoms arranged in a hexagonal lattice) as the substrate in which to build transistors.