Many of these are unanswerable because we just don't have sufficient technical info yet.
What we can say for definite about megatexture is that it's a storage and bandwidth problem. Storage capacity is higher than it was in 2011 (though people still whinged mightily about 50 GB for Wolf TNO), but an uncompressed build of Rage was still 1 TB. That's with full 32-bit RGBA source data; an informed guess would put a DXT-compressed build in the order of 125 to 250 GB, and the JPEG XR build that actually shipped was in the order of 20 GB. And that's with low-resolution texturing: even doubling the resolution in each dimension (textures are 2D) would quadruple the storage requirement, and that's before we factor in more geometry, bigger and more varied maps, etc.
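The arithmetic above hangs together if you run it. A quick back-of-envelope sketch (the 1 TB uncompressed figure is from above; the 8:1 and 4:1 ratios are the standard DXT1/DXT5 ratios relative to 32-bit RGBA, not anything id has published):

```python
UNCOMPRESSED_TB = 1.0  # 32-bit RGBA source build of Rage, per the above

def compressed_size_tb(uncompressed_tb, ratio):
    # Fixed-ratio block compression: DXT1 is 8:1 and DXT5 is 4:1
    # relative to 32-bit RGBA.
    return uncompressed_tb / ratio

def scaled_size_tb(size_tb, res_multiplier):
    # Textures are 2D, so an N-times resolution bump in each
    # dimension multiplies texel count (and storage) by N squared.
    return size_tb * res_multiplier ** 2

dxt1 = compressed_size_tb(UNCOMPRESSED_TB, 8)  # 0.125 TB, i.e. ~125 GB
dxt5 = compressed_size_tb(UNCOMPRESSED_TB, 4)  # 0.25 TB, i.e. ~250 GB
doubled = scaled_size_tb(dxt1, 2)              # 2x resolution -> 4x storage
```

Which is where the 125 to 250 GB guess comes from, and why "just ship higher-res textures" quadruples the problem every time you double it.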
Bandwidth is a bigger problem. Texture data must be read off disk, and the more data you need to read, the slower that's going to be. It then needs to be sent to the GPU, with a transcoding stage (assuming JPEG XR is retained) either before or after the transfer. Again, the more data, the slower all of this gets. And bandwidth hasn't been increasing as fast as processing power: you can't put 10 pounds of sh!t in a 9-pound bag.
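To put rough numbers on it (all of these figures are hypothetical ballparks for illustration, not measurements from any actual build):

```python
def seconds_to_stream(working_set_mb, bandwidth_mb_per_s):
    # Time to page a texture working set off storage, ignoring
    # seek latency and transcoding cost, so this is a best case.
    return working_set_mb / bandwidth_mb_per_s

hdd = seconds_to_stream(256, 100)  # ~100 MB/s spinning disk -> 2.56 s
ssd = seconds_to_stream(256, 500)  # SATA-class SSD -> ~0.5 s
# Double the texture resolution and the working set quadruples,
# so the stall quadruples with it:
hdd_4x = seconds_to_stream(256 * 4, 100)  # -> 10.24 s
```

Even granting generous numbers, the page-in stall scales linearly with data size while resolution scales it quadratically, which is exactly the bag-capacity problem.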
With megatexture you're simply not going to get high resolution up close no matter what, unless you have a huge GPU memory capacity and a data set small enough that you can just pre-load everything and keep it permanently resident. The options are either tricks like Rage's "texture detail", or some form of hybrid approach.