When I started, all power came from valve (tube) amps. They ran off lethally high voltages but could deliver, at huge cost, almost any power you wanted. The large voltage swings had to go through an output transformer before they were safe to feed to a loudspeaker, and the cost and weight of this, plus the mains transformer, made the amps back-breakingly heavy. Add in the reliability problems of the valves' limited life and most of us were ready for something new. Their one advantage was that when overloaded the distortion came in gradually and sounded quite sweet, which is why there's still a place for tubes in guitar amps.
The original solid-state amps ran off a single supply rail, with a pair of output transistors swinging the speaker voltage between 0V and the supply voltage. That was usually 70V, which is what the best affordable transistor of the time (the 2N3055) could handle. This meant that most early trannies, like Charlie Watkins' WEM amps, could only give about 70W into 8 ohms and 120W into 4 ohms. To isolate the speaker from the power supply you also needed a big output capacitor, which cut the bass output and brought reliability problems of its own.
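The single-rail limit can be sketched with a bit of back-of-envelope arithmetic. This is an idealised sketch assuming the output can swing the full rail with no device losses; real amps of the era delivered rather less (hence the WEM's 70W and 120W figures), partly through losses and partly through current limits.

```python
# Theoretical maximum sine-wave power from a single-rail amp.
# Idealised: assumes full rail-to-rail swing and no device losses.

def max_power_single_rail(v_supply, load_ohms):
    v_peak = v_supply / 2          # output idles at half rail, swings +/- V/2
    v_rms = v_peak / 2 ** 0.5      # RMS of a sine wave
    return v_rms ** 2 / load_ohms  # P = V^2 / R

print(max_power_single_rail(70, 8))  # ~76 W into 8 ohms
print(max_power_single_rail(70, 4))  # ~153 W into 4 ohms (current-limited in practice)
```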
Improvements in technology soon moved us to a split power supply, often with an integrated-circuit op-amp handling the input stage. The output now swung symmetrically, + and - about ground, so the output capacitor could go. The introduction of better transistors like the 2N3773, used in HH amps, and the TIP series pushed supply voltages to 100V and doubled the power output.
Up to this stage the maximum output of an amp was limited by the supply rail voltage, and high-voltage power transistors were very expensive. People soon realised, however, that you could power a speaker by connecting it across two amps. Provided one amp's input is fed the inverted signal, one amp pulls the speaker towards the full positive rail while the other pulls it the same distance the opposite way. This is called bridging (after the Wheatstone bridge you may have met at school). You have doubled the voltage across the speaker, and because power is proportional to the square of the voltage, you have four times the power. Overnight car stereos went from 5W to 20W and SS amps went from 100W to 400W.
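The four-times claim drops straight out of P = V²/R. A minimal sketch with an illustrative 20V RMS single-amp swing (the exact voltage doesn't matter, only the ratio):

```python
# Bridging: two amps drive the speaker in antiphase, doubling the
# voltage across it. Since P = V^2 / R, doubling V quadruples P.

def power(v_rms, load_ohms):
    return v_rms ** 2 / load_ohms

single = power(20, 8)       # one amp on its own: 50 W
bridged = power(2 * 20, 8)  # same rails, bridged pair: 200 W
print(bridged / single)     # 4.0
```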
Since then power transistors have largely been replaced by more reliable FETs, which can be made to run at much higher voltages, and the technical limits on power have been pushed right back. Amplifier watts have never been so cheap, and the next step has already been taken: the little digital amps such as the Mark Bass herald a new era. Expect to be swamped with new offerings of lightweight, dirt-cheap, ultra-reliable kilowatt amps over the next few years, as soon as the financial crisis is over.
One reason to want this extra power is that trannie amps sound awful when overloaded. As soon as the signal hits the supply rails they simply stop amplifying, turning your lovely mix of sine waves into nasty square waves. So some headroom is a great idea: no overload means a cleaner sound and better reliability (for the amp). But there is a downside.
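That hard "flat-topping" at the rails is easy to picture in a few lines. A toy sketch (the signal and rail values are made up for illustration), clipping a sine wave driven three times past the supply rail:

```python
# Hard clipping: once the signal reaches the supply rails, a transistor
# amp stops amplifying and flat-tops the waveform, pushing a clean sine
# towards a harsh-sounding square wave.
import math

def clip(sample, rail):
    return max(-rail, min(rail, sample))

rail = 1.0
clean = [math.sin(2 * math.pi * t / 64) for t in range(64)]
overdriven = [clip(3 * s, rail) for s in clean]  # driven 3x past the rail

print(max(overdriven), min(overdriven))  # 1.0 -1.0: flat tops and bottoms
```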
Imagine a world where, overnight, we developed a car engine that was cheap and gave four times the power. The boy racers would all want one and there would be a race for ever more power. Suddenly they would look pityingly at anyone with less than 800 horsepower or a 0-60mph time over 3.5 secs. It would be dangerous to go out on the road with them, and we would all "need" the extra power "just to get out of trouble in an emergency". Yet the speed limits would stay the same, because of the limitations of the squishy bit behind the wheel.
This is where we are in amplifier land at the moment. As an engineer I think in dB, not watts. The maximum sound level you should be exposed to for two hours in the States is 100dB (or it was in 1990), and the same is true in Europe. In a recent study at "a well known music festival in England" the sound levels for the bass player and the drummer were just above this level (101dB and 104dB). In the UK, planning laws mean that bars and theatres often have to be fitted with electronic devices that cut the mains if the sound goes above certain levels. Effectively we have sonic speed cameras.
Now, to produce an average sound level of 100dB at 1 metre, my very ordinary Peavey speaker needs just 1W. That's right, one! If I want this level at 2 metres I need 4W, and at 4 metres I need 16W. My lead is only 4m when you allow for the bit that dangles to the ground, and my guitarist uses a 4m lead. Even if, for some strange reason, I wanted to boost my bottom frequencies by 10dB compared with the rest of the frequency range, I'd still only need 160W (a 10dB boost is ten times the power). If I use my 2x12 then I need only half that. How much overhead does anyone need?
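The arithmetic above can be rolled into one small function. It assumes a speaker sensitivity of 100dB at 1W/1m (roughly the figure implied in the text) and free-field inverse-square spreading, i.e. -6dB per doubling of distance; real rooms reflect sound and behave a little better than this.

```python
# Watts needed for a target SPL at a given distance, assuming a
# sensitivity of 100 dB @ 1 W / 1 m and inverse-square spreading.
import math

def watts_needed(target_db, distance_m, sensitivity_db=100.0):
    db_at_1m = target_db + 20 * math.log10(distance_m)  # inverse-square loss
    return 10 ** ((db_at_1m - sensitivity_db) / 10)     # dB above 1 W -> watts

print(watts_needed(100, 1))   # 1 W
print(watts_needed(100, 2))   # 4 W
print(watts_needed(100, 4))   # 16 W
print(watts_needed(110, 4))   # 160 W for a 10 dB bass boost
```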
If your onstage levels are above this you will sound sh-t. At these levels permanent hearing damage starts to set in, and your brain acts to protect your ears by damping down and dispersing as much acoustic energy as it can. The first thing you lose is the speech frequencies, which do the most damage because this is where your ears are most sensitive. That means you lose the ability to really hear the other members of the band, so you lose cohesion as a band. More importantly, the singer will not hear their own voice properly and their pitch won't be as accurate. If you want to be loud, because you play metal or you just attract giant crowds or for whatever reason, then use the PA and keep the highest sound levels well away from the stage.
We live in a golden age for amplifiers. The price per watt is at an all-time low, reliability is almost total (I rarely get power amps in for repair), they weigh next to nothing, and with switch-mode power supplies even the heavy, expensive mains transformer isn't needed. My pub band carries more power than The Who or Pink Floyd did. Just mind your ears: with power comes responsibility.