First... if you are not comfortable with overclocking or are inexperienced at it, then do not attempt it, or go to one of the tech forums that explain it in detail, like Guru3D.com's forums. Luckily, a lot of today's graphics card utilities 'do the work for you' and/or have safety features built in that shut down your system, restore defaults, or both before any harm is caused.
I am not completely aware of how AMD/ATi graphics cards are set up, but I do know that nVidia graphics cards from the late-model GTX 200 series through the current GTX 500 series (and everything in between) are fairly simple to overclock. All it takes is a utility like "eVGA Precision v2.0.3", "MSi AfterBurner v2.1.0", "Asus GPU Tweak v1.10" (my new favorite), "nVidia Inspector v1.95.5", "Gainward ExperTool v7.20 (nVidia) / Gainward ExperTool v4.2 (AMD/ATi)", or "Palit vTune v7.20 (nVidia) / Palit vTune v4.20 (AMD/ATi)". There are some others, but these will do just fine.
I know that both ExperTool and Palit vTune have separate editions for nVidia and AMD/ATi; Palit's vTune is basically the same utility as ExperTool with a different GUI. I do believe that "Asus GPU Tweak v1.10" supports both nVidia and AMD/ATi cards, seeing that Asus sells both brands.
Once you install either eVGA Precision or MSi AfterBurner, launch it and go into 'settings', then under 'general' place a tick in every box under "Safety Properties". If your graphics card supports voltage adjustment and monitoring, this will enable it.
When OC'n your graphics card, more power is required; this is where the voltage adjustment comes in. I do not know the voltage range for every nVidia card from the GTX 200 series to the current ones, but as an example, my nVidia GTX 465 has a voltage range starting @ 987 mV with a maximum of 1087 mV, and this range is completely safe for the GTX 465. The trick is figuring out the correct and perfect combination. If the utility you chose does not have an 'auto overclocking' feature, then very small mhz increments are the safest way of overclocking "anything".
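To illustrate what "staying inside the safe range" means, keeping a requested voltage within the card's limits is just a clamp. This is only a sketch: the 987-1087 mV limits below are the GTX 465 numbers quoted above, so look up your own card's range before touching anything.

```python
# Hedged sketch: limit a requested GPU voltage to the card's safe range.
# 987-1087 mV is the GTX 465 example from this post, NOT a universal range.

VMIN_MV = 987   # GTX 465 stock/minimum voltage (from the post)
VMAX_MV = 1087  # GTX 465 maximum safe voltage (from the post)

def clamp_voltage(requested_mv: int) -> int:
    """Return the requested voltage, capped to the safe range."""
    return max(VMIN_MV, min(VMAX_MV, requested_mv))

print(clamp_voltage(1100))  # asked for too much -> capped at 1087
print(clamp_voltage(1000))  # inside the range -> unchanged
```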
Important things to remember >>
** First, the voltage (mV) adjustment only supplies more voltage to the GPU and Shader cores; it does NOT boost the GDDR memory voltage, so do not go crazy with the GDDR memory overclocking. In fact, for people that do not overclock, leave it be. If you do increase it, do it in the minimum increments the utility allows, then "stress test" the graphics card after each increase.
** Next, overclocking and increasing voltages = MORE HEAT. Make sure your graphics card runs cool and has adequate airflow; if you have problems with your graphics card running hot @ default clocks, then don't go any further, or figure out a way to get more air to it. Also, once I get to a 5% overclock increase (e.g.: 607 mhz / 1215 mhz to 637 mhz / 1275 mhz), I manually set my 'fan speed' to 65% to 75% to keep temps in check. I have no issues with temps (30°C @ idle), but I've got an Antec gaming case with a lot of fans and airflow. Always monitor your temps when overclocking.
** Also, "stress test" your graphics card with every speed increase. There are several utilities available for this: MSi AfterBurner v2.1.0 includes a copy of "MSi Kombustor v2.0.0" (aka: FurMark) that can stress test your graphics card, and eVGA has a separate download called "eVGA OC Scanner v1.7.2" that stress tests, scans for artifacts, benchmarks, and logs details. You can also just download "FurMark v1.9.1", "TessMark 0.3.0" (nVidia GTX 400 series and up / AMD Radeon HD 5000 series and up only), or "PhysX FluidMark 1.3.1" (nVidia), or simply use any PC game's built-in benchmark / timedemo feature, like CS:Source's stress test, or run any graphical benchmark like 3Dmark Vantage / 3Dmark 11 / 3Dmark 2006, the Unigine Heaven v2.5 benchmark, etc. Basically, any utility, benchmark, or game that will put your card through its paces and can run for a few minutes and/or repeat the benchmark/stress test loops will do.
While you do this, monitor temps and look for artifacts, and if the utility allows it, log them.
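The "bump the fan once you're about 5% over stock" rule above is simple arithmetic; here is a minimal sketch of it. The stock clock is the GTX 465 figure from this post, and the ~5% threshold and 65-75% fan value are the author's personal habit, not hard rules.

```python
# Illustrative sketch of the fan rule: once the GPU clock is roughly 5%
# over stock, pin the fan manually instead of leaving it on auto.
# STOCK_GPU_MHZ is the GTX 465 value from this post.

STOCK_GPU_MHZ = 607

def overclock_percent(gpu_mhz: float) -> float:
    """Percent increase over the stock GPU clock."""
    return (gpu_mhz - STOCK_GPU_MHZ) / STOCK_GPU_MHZ * 100.0

def fan_setting(gpu_mhz: float) -> str:
    # Around 5% over stock (e.g. 607 -> 637 mhz), switch to a manual fan speed.
    return "manual 65-75%" if overclock_percent(gpu_mhz) >= 4.9 else "auto"

print(f"{overclock_percent(637):.1f}%", fan_setting(637))  # ~4.9% -> manual
```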
My GTX 465 has a stock gpu clock of 607 mhz and a shader clock of 1215 mhz. With nVidia cards, when you increase the "gpu clock" in these utilities, it increases the "shader clock" as well. Currently I have put my card back to stock clock speeds. If I do OC my gddr memory, I do it separately from the gpu/shader increases; that way it helps to determine what causes the artifacts / freeze-ups. Here is an example of how I increase my graphics card's clock speeds >> I initially set the first increase to 2.5% to 3%. I also manually set my fan speed to 100% to ensure the coolest temps possible, or you can leave it on auto if you wish.
** gpu @ 607 mhz / shader @ 1215 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 987 mV (stock)
** gpu @ 620 mhz / shader @ 1240 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 987 mV
** gpu @ 625 mhz / shader @ 1250 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 987 mV
** gpu @ 630 mhz / shader @ 1260 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 987 mV
** gpu @ 636 mhz / shader @ 1272 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 987 mV
** gpu @ 641 mhz / shader @ 1282 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1000 mV
** gpu @ 646 mhz / shader @ 1292 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1000 mV
** gpu @ 652 mhz / shader @ 1304 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1000 mV
** gpu @ 657 mhz / shader @ 1314 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1012 mV
** gpu @ 665 mhz / shader @ 1330 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1012 mV
** gpu @ 670 mhz / shader @ 1340 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1025 mV
** gpu @ 675 mhz / shader @ 1350 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1025 mV
** gpu @ 681 mhz / shader @ 1362 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1025 mV
** gpu @ 686 mhz / shader @ 1372 mhz / gddr memory @ 1603 mhz / voltage (mV) @ 1037 mV
** Remember >> Stress Test after EVERY increase!
** maxed out @ 768 mhz (gpu) / 1536 mhz (shader) / voltage @ 1075 mV. I also increased my gddr memory speed too, but only to 1640 mhz; pushing your memory too far will easily cause problems.
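The stepping pattern in the list above can be sketched in a few lines: small ~1% GPU bumps with the shader clock locked at 2x the GPU clock (which is how these utilities link the two on my card) and the memory held at stock. The step size and stopping point below are illustrative assumptions, not a recipe; your card's stable ceiling will differ.

```python
# Rough generator for an overclock step table like the one above.
# Assumptions: ~1% GPU steps, shader = 2x GPU (linked clocks), memory at stock.

def oc_steps(stock_gpu_mhz: int, max_gpu_mhz: int, step_pct: float = 1.0):
    """Yield (gpu, shader) clock pairs from just above stock up to a chosen max."""
    step = max(1, round(stock_gpu_mhz * step_pct / 100))  # ~6 mhz for a 607 stock
    gpu = stock_gpu_mhz
    while gpu < max_gpu_mhz:
        gpu = min(gpu + step, max_gpu_mhz)
        yield gpu, gpu * 2  # shader clock tracks the gpu clock at 2:1

for gpu, shader in oc_steps(607, 637):
    print(f"gpu @ {gpu} mhz / shader @ {shader} mhz")
```

Remember: this only produces candidate clock values; the actual work is applying each step in your utility and stress testing before moving to the next one.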
Now, if you start seeing artifacts, it is probably best to stop completely. If you CTD (crash to desktop), the game freezes, or it gets choppy / laggy, try to note the temps @ the time it happened and make adjustments accordingly; you may have simply pushed the GPU/shader clocks too high without supplying enough voltage. An underpowered graphics card WILL crash, freeze, or play badly. Artifacts are a graphics card's way of saying, push me any more and you will regret it.
That is the best way, or shall I say, the safe way to overclock your graphics card. After each increase, I run a "stress test". The first few stress tests are usually short ones, because most cards can easily handle those low overclock increases. Once I get about 7.5% over the stock clock speeds, I increase the length of the stress test and make sure that my graphics card has reached the maximum temperature it is going to run at; once max temp is reached, let it run for about 2 minutes.
Now, if you have an "SLi setup" like I do, the best way to achieve maximum overclocks is by overclocking each card individually. The problem with overclocking them both @ the same time is that when you run into problems, like artifacts, crashing, or freezing, you aren't really sure which card is not able to handle it. I individually OC'd both my GTX 465's before I even set them up in SLi; once I found each card's maximum stable overclock, I used the lowest one after I hooked them up in SLi. With SLi running, I cannot get my 768 mhz / 1536 mhz clock speeds; 742 mhz (gpu) / 1484 mhz (shader) is the max OC I can get in SLi mode, which is plenty.
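The SLi rule above boils down to one line: find each card's maximum stable clock on its own, then run the pair at the lower of the two. The per-card maxima below are illustrative placeholders (borrowed from the clock figures in this post), not measurements of any particular pair.

```python
# Sketch of the SLi rule: the pair runs at the weaker card's stable maximum.
# Numbers are illustrative placeholders, not real per-card measurements.

card_a_max_mhz = 768  # card A's max stable gpu clock, tested alone
card_b_max_mhz = 742  # card B's max stable gpu clock, tested alone

sli_gpu_mhz = min(card_a_max_mhz, card_b_max_mhz)
sli_shader_mhz = sli_gpu_mhz * 2  # shader clock tracks gpu at 2:1 on these cards

print(f"SLi clocks: {sli_gpu_mhz} mhz (gpu) / {sli_shader_mhz} mhz (shader)")
```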
The maximum overclock I achieved is >> gpu @ 768 mhz / shader @ 1536 mhz / voltage @ 1062 mV. My idle temps increased from 30°C (86°F) to 35.5°C (95.9°F), and my under-load temps went from 58°C (136.4°F) to 67°C (152.6°F), which is still well within the acceptable / tolerable range. This overclock increased my graphics card's performance considerably too. My 3Dmark 11 score on a single GTX 465 went from P4248 (stock clocks) to P5081 (overclocked @ 768 mhz); on 3Dmark 11 that is a considerable increase. My fps in Crysis 2 was on average 38 fps higher than what I usually get. If you were getting only 30 fps in Crysis 2, then another 38 fps on top of that would be heaven for you. All performance increases were significant.
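For anyone curious how those gains compare in relative terms, here is a quick check using only the numbers quoted above (the scores and clocks are from this post; the percentages are just arithmetic on them):

```python
# Percent improvement from the 3Dmark 11 scores and GPU clocks quoted above.

score_stock, score_oc = 4248, 5081   # 3Dmark 11 "P" scores from the post
clock_stock, clock_oc = 607, 768     # gpu clock in mhz, stock vs. max overclock

score_gain_pct = (score_oc - score_stock) / score_stock * 100
clock_gain_pct = (clock_oc - clock_stock) / clock_stock * 100

print(f"3Dmark 11: +{score_gain_pct:.1f}%")  # roughly +19.6%
print(f"gpu clock: +{clock_gain_pct:.1f}%")  # roughly +26.5%
```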
If you use eVGA Precision, MSi AfterBurner, or Asus GPU Tweak, make sure you set the options to display your FPS and temperatures in the "Onscreen Display". Putting the temps in the "Windows Taskbar" is also a good idea; this allows you to always be able to monitor your temps.
MSi AfterBurner used to be my number one choice, basically because of its options and its ability to work with most of the nVidia graphics cards out there; it does not have to be an "MSi" brand card. Recently I discovered "Asus GPU Tweak"; it is basically the same thing as MSi AfterBurner / eVGA Precision, but it looks better to me and I like the options it has, though I have not gone through all of them yet.
Just remember... small increment increases, monitor temps, make sure you have adequate airflow, stress test after each overclock increase is applied.
** Also, on some nVidia cards, you can remove the "overclocking sync" for the GPU and shader cores; in other words, you can adjust each one separately. I am not positive whether ALL of them are capable of this or not, but most nVidia cards are.
As for ATi overclocking and the features / options available, I cannot help you there; you will need to check out some of the forums like Guru3D.com, ExtremeOverclocking.com, TechPowerUp.com, etc.
This is just an alternative for those of you having poor performance problems who cannot afford to upgrade ATM. I would still caution you though: SMALL STEPS ONLY!