1080p is 1920x1080: 1080 horizontal lines, each 1920 pixels wide. 720p is 1280x720: 720 lines, each 1280 pixels wide.
And yet 1280x1080 is both a supported backbuffer resolution and a used one, and they still call it 1080p!
I'm... lost. Upscaling doesn't introduce any more pixels, so it doesn't increase the resolution at all, and it can't change the size of the image on screen either (not like it could; the lower-resolution picture already fills the entire TV screen), because it has nothing to do with the final size of the upscaled image - yet it results in bigger pixels? How do you fit larger pixels, in the same quantity as 720p's, onto a screen with no more space on it?
It does increase the resolution. It does this by making the existing pixels "bigger" - or rather, by turning what was one pixel into multiple pixels. If you resize an image to twice its original size, for example, each pixel becomes four - from
X
to
XX
XX
Double the "size", the same information. Filtering would then go through the whole image and try its best to remove the "blockiness" this creates.
Your TV can only display one resolution - its native resolution. However, it has a hardware scaler in it that can upscale (and, I think, downscale) video you pipe into it. If you send 720p to a 1080p TV, it upscales it and displays it. If you send 1080p to a 720p TV, it (I believe) downscales it to 720p.
So however you do it, footage is being scaled to your TV's native resolution. Whether you do it on the console and send the TV 1080p, or do it on the TV and send the TV 720p, it gets output at the same resolution.
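To put that another way, something like the sketch below is happening somewhere in the chain, whether the console or the TV is the one doing it. This is only a rough conceptual version in Python - real scalers are dedicated hardware and sample better than nearest-neighbour - but it shows that whatever resolution goes in, a native-resolution frame comes out:

# Conceptual scaler: every pixel of the native-resolution output grid is filled
# from the nearest pixel of the source frame, whatever size the source is.
def scale_to_native(source, native_w, native_h):
    src_h, src_w = len(source), len(source[0])
    output = []
    for y in range(native_h):
        src_y = y * src_h // native_h          # nearest source row
        row = [source[src_y][x * src_w // native_w] for x in range(native_w)]
        output.append(row)
    return output

# A 720p frame and a 1080p frame both come out as 1920x1080 on a native-1080p panel.
frame_720p = [[0] * 1280 for _ in range(720)]
frame_1080p = [[0] * 1920 for _ in range(1080)]
for frame in (frame_720p, frame_1080p):
    scaled = scale_to_native(frame, 1920, 1080)
    print(len(scaled[0]), "x", len(scaled))    # always 1920 x 1080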
Why do the consoles do it, then? Because upscaling is easy - filtering the result to remove artifacts is less easy, and a lot of TVs don't bother. Upscaling before you send the image to the TV ensures a consistent picture across different models of TV.
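For what it's worth, the "filtering" I mean is the kind of thing sketched below: bilinear interpolation, about the simplest filtered upscale, which blends the four nearest source pixels instead of copying one, so hard block edges turn into gradients. Actual TVs and consoles use fancier kernels than this; it's just to show the idea:

# Bilinear upscale: each output pixel is a weighted blend of its four nearest
# source pixels, which smooths the hard edges a plain pixel-copy would leave.
def bilinear_upscale(source, out_w, out_h):
    src_h, src_w = len(source), len(source[0])
    out = []
    for y in range(out_h):
        fy = 0.0 if out_h == 1 else y * (src_h - 1) / (out_h - 1)
        y0 = int(fy)
        ty = fy - y0
        y1 = min(y0 + 1, src_h - 1)
        row = []
        for x in range(out_w):
            fx = 0.0 if out_w == 1 else x * (src_w - 1) / (out_w - 1)
            x0 = int(fx)
            tx = fx - x0
            x1 = min(x0 + 1, src_w - 1)
            top = source[y0][x0] * (1 - tx) + source[y0][x1] * tx
            bottom = source[y1][x0] * (1 - tx) + source[y1][x1] * tx
            row.append(round(top * (1 - ty) + bottom * ty))
        out.append(row)
    return out

# A hard black/white edge picks up in-between greys once filtered.
edge = [[0, 0, 255, 255]]
print(bilinear_upscale(edge, 8, 1))  # [[0, 0, 0, 73, 182, 255, 255, 255]]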