Quote by funkbass369
Yeah, it was gallium not germanium, my mistake. So Ga is 3+ and Se is 2-, right? So then it's Ga2Se3, because you'd need to balance the charges out by multiplying to give 6+ and 6-. That's still not the answer given in the book, which is Ga2Se. I'm not seeing how that's the answer.


Ga is most typically found in the Ga(III) oxidation state, less often in Ga(I) and Ga(II). It would be odd for them to have you rationalize the Ga(I) or Ga(II) products, though they do exist! I suspect you might have a typo in your answers (both C and D are the same).
Quote by funkbass369
I still don't understand the second one. Ge has a charge of 4+, doesn't it, since it is in group 4A? And then Se has a charge of 2-? So wouldn't the answer be GeSe2 instead of Ge2Se?


You were referring to gallium (III) in the original post. Not germanium.

Quote by funkbass369
I'm also having trouble with this problem:

carbon monoxide gas reacts with hydrogen gas at elevated temperatures to form methanol according to this equation:

CO(g) + 2H2(g) <---> CH3OH(g)

When 0.40 mol of CO and 0.30 mol of H2 are allowed to reach equilibrium in a 1.0 L container, 0.060 mol of CH3OH is formed. What is the value of Kc?

What I did was:

Kc = [CH3OH]/([CO][H2]^2) = [0.060]/([0.40][0.30]^2)

I got Kc to be 1.7. My ACS exam book says that the answer is 5.4 though.


The concentrations of CO and H2 are no longer their initial values if some methanol was formed. Account for that. 0.06 moles of CO and 0.12 moles of H2 were consumed, so subtract these values from the initial ones. Otherwise you have the right idea.
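If it helps to see the bookkeeping, here's a minimal Python sketch of that ICE-table arithmetic (illustrative only; the numbers are the ones from the problem):

```python
# ICE-table arithmetic for CO(g) + 2 H2(g) <=> CH3OH(g)
# (values from the problem; 1.0 L container, so mol and mol/L coincide)
co0, h2_0, v = 0.40, 0.30, 1.0
x = 0.060                      # mol of CH3OH formed at equilibrium

co = (co0 - x) / v             # 1 CO consumed per CH3OH  -> 0.34 M
h2 = (h2_0 - 2 * x) / v        # 2 H2 consumed per CH3OH  -> 0.18 M
ch3oh = x / v                  # 0.06 M

kc = ch3oh / (co * h2 ** 2)
print(round(kc, 1))            # -> 5.4
```

Note it reproduces the book's 5.4 once the consumed amounts are subtracted.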
Quote by funkbass369
I think I understand number 3. It has to do with the number of protons in each atom, right? They all have the same number of electrons, so more protons will pull harder on the electrons, making the atom smaller. Correct? I still don't understand questions 1 and 2 at all.


When glucose is oxidized, the oxygen molecule is reduced. In this reaction, one of the O atoms will bind two hydrogens, the other will bind a carbon forming exactly two products: 6CO2 + 6H2O. This is a simplified explanation for a very complex biochemical process (but it's the answer you're going for at this level).

Number two deals with the charges on the ions. What is the most probable charge state of Ga? Of Se? The charge of Ga becomes the number of Se needed, the magnitude of the charge of Se becomes the number of Ga needed. Ga is most likely... 1+, 2+, or 3+? Se is most likely 1-, 2-, 3-?

Number 3 is all about the number of protons. When a series of isoelectronic systems is presented to you, and they ask about size... the system with the fewest protons is largest, the system with the most protons will be smallest. The explanation you provide above is the answer you're looking for at this level (there are simply deeper explanations, the answer doesn't change).
Quote by magnus_maximus
I don't drive.

Fuck da po-lice.


A certain Neil deGrasse Tyson meme comes to mind...

There's a ton of examples just Googling 'Bode diagrams and MATLAB' but if you need help beyond what the examples offer, I can hopefully help you out some.
Quote by pugachev
Hey guys,

I'm looking to get back into PC gaming so I dusted off my old build from 2009. As of right now, my video card is severely outdated and my heatsink is barely hanging on because the little push pins are broken. It's the stock heatsink. So, I need to replace some parts.

Now, if the i7 920 is still a pretty decent processor, I would like to keep that and overclock it to save some bucks, but I'll need a new heatsink. I don't have any experience OC'ing, but I know you have to have a decent set of RAM to get anywhere. Will these 6 GB be good enough to OC to around 3.6 GHz? And is 6GB enough these days for gaming?


There are plenty of guides out there to help you learn how to properly OC. Just make sure you have sufficient cooling and are not overly ambitious if you actually intend to use the computer -- a good target for a 24/7 OC on the i7 920 is 3.30-3.40 GHz. You are in luck memory-wise: low-voltage memory can usually be OC'd further than its standard-voltage cousins, which gives you more wiggle room.

Make sure you have a damn good power supply, replace the thermal paste (keep in mind that most pastes claim to take quite a few cycles of heating/cooling to achieve peak performance), and replace the heatsink. I reiterate, make sure you have a damn good power supply.

Quote by pugachev

I will also need a new HDD, I'm thinking of a SSD, I've been using a 5400 RPM eMachines HDD since like 2003


It'll cost $300+ for an SSD with a capacity exceeding approx. 256 GB on Newegg. If you're comfortable with the smaller size, then by all means it is totally worth it for the performance, provided you pick out a decent model. It might be in your best interest to install two separate drives: one very nice 128 GB SSD paired with a high-capacity mechanical hard drive. Install the things you commonly use on the SSD, put everything else on the mechanical drive. Assuming the board can support it!
Quote by guitarplaya322
I've got a pretty broad biology question that I have to write about and I'm having some trouble getting started with it. How can two nearly identical protein sequences (from two different species) code for such huge differences in proteins? If you have any articles or webpages that can help me get started with this question that would be greatly appreciated, as I really have no idea where to start with this.


I assumed you were asking for some talking points related to structural variability in proteins of similar function arising from small changes in the genetic sequence coding for it.


I would focus on subtle alterations in codons that would change key residues in the final protein product (e.g., changes that increase or decrease the presence of proline in the protein, loss or gain of charged residues -- assume physiological conditions -- or changes that result in substitution of hydrophilic vs. hydrophobic residues). Additionally, you could discuss thermal stability or pH stability and how similar enzymes vary among thermophilic, mesophilic, acidophilic, etc. organisms.

Good proteins to begin with are those in the group of so-called universally conserved proteins.
Use the substitution u = cos(x); du = -sin(x) dx. You should be able to recognize the solution of the resulting integral, or consult an integral table if you don't.
Quote by funkbass369
Consider the reaction: A + B → products
The following rate law was experimentally determined: rate = k[A][B]^2
If the concentration of A doubles and B stays the same, the rate will?

What if both A and B are doubled?


1st question
==>
[B] is held constant, therefore the reaction rate follows the pseudo-first-order equation...

rate = k'[A], where k' absorbs both [B]^2 and the original rate constant.

Now, double the concentration of A... what's the new rate? [2A]^1 = 2*[A], so new_rate = 2*k'[A]

new_rate / old_rate = 2, so the rate is now twice that of the original.
==>

2nd question
==>
In this case, both concentrations change, so there's no simplifying it any further to a pseudo-lower order rate equation.

rate = k[A][B]^2

If we double A and B, the rate becomes new_rate = k[2*A][2*B]^2, which is of course equal to...

new_rate = (2*2^2)k[A][B]^2 = (2*4)k[A][B]^2 = 8*k[A][B]^2

new_rate / old_rate = 8, so the rate has increased 8-fold.
==>
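If you want to sanity-check both scaling arguments numerically, here's a quick sketch assuming the rate law rate = k[A][B]^2 as in the working above (k and the concentrations are arbitrary illustrative numbers; only the ratios matter):

```python
# Scaling check, assuming the overall rate law: rate = k[A][B]^2
def rate(k, a, b):
    return k * a * b ** 2

k, a, b = 1.0, 0.1, 0.1          # arbitrary illustrative values

# Double [A] only ([B] held constant, pseudo-first order in A):
print(round(rate(k, 2 * a, b) / rate(k, a, b), 3))      # -> 2.0

# Double both [A] and [B]:
print(round(rate(k, 2 * a, 2 * b) / rate(k, a, b), 3))  # -> 8.0
```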
Quote by funkbass369
Hydrogen chloride decomposes at high temperatures to hydrogen and chlorine gases:

2HCl(g)--->H2(g)+Cl2(g)

The reaction is second order with respect to [HCl]. By what factor does the rate increase if the concentration of HCl is tripled?

I know the answer is nine, but I have no idea why. Instinctively, the answer was 6 to me because of the 2 in front of HCl: 2*3 = 6. But that is not correct. Can someone please explain?


The most basic definition of a second-order rxn is...

-d[A]/dt = k[A]^2

Triple the concentration of A, the rate of consumption of A increases 9-fold.
Quote by metal4eva_22
But P varies. Are you allowed to do that even though P is not constant?


An isothermal, quasi-static/reversible process yields...

w = -nRT*ln(Vb/Va) = -nRT*ln(p_init/p_final)

Vb is the final volume, Va the initial...
p_init is the initial pressure, p_final is the final pressure...

For an ideal gas both equations are equivalent.

You solve the integral by substituting nRT/V for p, but you can switch back and use the given pressures by substituting nRT/p_final for Vb and nRT/p_init for Va. The nRT terms cancel, leaving the second form of the solution shown above.
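A quick numerical sanity check of that equivalence, with illustrative values for n, T, and the volumes:

```python
# Check that -nRT*ln(Vb/Va) equals -nRT*ln(p_init/p_final) for an ideal gas.
# n, T, Va, and Vb are arbitrary illustrative values.
import math

R = 8.314                      # J/(mol K)
n, T = 1.0, 300.0
Va, Vb = 0.010, 0.030          # m^3

p_init = n * R * T / Va        # ideal gas: p = nRT/V
p_final = n * R * T / Vb

w_vol = -n * R * T * math.log(Vb / Va)
w_press = -n * R * T * math.log(p_init / p_final)
print(abs(w_vol - w_press) < 1e-6)   # -> True
```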
Quote by Neo Evil11
I dont know man. Life as a mathematician seems to be pretty boring to me. It is cool to know, but there is no feeling or creativity in it.


Computational origami...
Quote by metal4eva_22
What type of DNA mutation can cause either a macro (more than 1 bp changed) or a micro (1 bp changed) lesion?


Deletions... can be on the bp or chromosomal scale (i.e., deleting parts of or entire genes).
Quote by laid-to-waste
yea, but doesn't a black hole's mass accumulate, constantly getting more massive from what it sucks in?


The black hole is constantly losing tiny fractions of mass (Hawking radiation) and only accumulates mass if something falls in. Black holes can consume each other and form one big black hole -- you should read about this, it's flat out amazing!

Quote by laid-to-waste
also, how was the mass content of the universe measured and compared with the net gravitational force in the universe?


We've measured the density of the universe through fluctuations in the cosmic microwave background radiation. Approximating the size of the universe, we have a volume. This gives us an approximate total mass of both normal, luminous matter and dark matter. We've estimated just how much luminous matter the universe contains (I don't know the details offhand, but I imagine they can't be too hard to find with a Google search). We learn there's more mass than we expected!

Here's how scientists infer the presence of additional mass on a galactic rather than universal scale (you can read it without a university subscription):
http://www.astro.caltech.edu/~george/ay20/eaa-darkmatter-obs.pdf
Quote by Dreadnought
I thought string theory rolled over, died, and was surpassed by m-theory?


M-theory is an extension of string theory... it's mathematically quite consistent and useful, but makes very few testable predictions. That's why people are losing faith in it. Years of 'we're close' and still we seem to be lost in extremely difficult mathematics. The things people think we can test are probably still years off technologically. People are growing impatient with the theory, but it's pretty much the best candidate for a unifying theory we have to date.

But like Neil deGrasse Tyson says... they're cheap. So they always find jobs.
Quote by Ur all $h1t
Nope, his popularity was never particularly high. Even in early days the Nazis struggled to gain support and never managed to gain a majority in the Reichstag, all of this despite murdering and banning the opposition along with a massive campaign of intimidation against anyone who spoke out against them. I would hazard a guess that even the Lib-Dems would do a better job of gaining support were they able to murder Tory leaders, ban the Labour party and beat the shit out of anyone who said anything bad about them.
In terms of efficiency, that whole thing was a mask. You look at the details of Nazi organization and the bureaucracy which they set up and it becomes very clear. The way they were running things was unsustainable; war was not an incidental thing that ruined their success, war was a fundamentally necessary thing that they required in order to allow and then prolong their apparent economic successes. To take a simple example, the reduction in unemployment is often cited as an example of Hitler's economic success, but it was only possible due to a combination of massive rearmament and removing an entire class of people (Jews) from the workforce. Without the war and genocide that would have collapsed even sooner.


Their struggles in the elections are definitely true -- they had ~30% of the seats in 1932. However, after Hitler came to power, he did enjoy a stint of increased popularity because he was able to stabilize the economy among other things (even if it was not sustainable). Before the war, his popularity was more-or-less a roller coaster. He would make very aggressive diplomatic moves, the people would fear war, and his popularity would drop... but nothing would happen and the people would give a nod of approval. His popularity probably peaked after the invasion of France, but plummeted throughout the rest of the war.

I'd be interested to see actual numbers though... I've only briefly Googled for a simple graph, with no luck.
Quote by jazz_rock_feel
Math isn't really all that important...


Linear algebra, period.

Linear algebra powers the Google search engine, quantum chemistry software, powerful indexing codes, the code behind graphics manipulating software, and a HUGE variety of other useful applications. Beyond basic maths, it is THE single most important tool in the programmer's arsenal.
Was it Kant who (and I'm paraphrasing) said that a true science stood on its use of mathematics?
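To make the search-engine point concrete, here's a toy sketch of PageRank-style power iteration -- the three-page link graph and the damping factor are made up purely for illustration, and real implementations are vastly more sophisticated:

```python
# Toy PageRank via power iteration (repeated matrix-vector products).
# links[i][j] is the share of page j's vote given to page i;
# this hypothetical graph is symmetric, so ranks should come out uniform.
links = [[0.0, 0.5, 0.5],
         [0.5, 0.0, 0.5],
         [0.5, 0.5, 0.0]]
d, n = 0.85, 3                 # damping factor, number of pages
rank = [1.0 / n] * n           # start from a uniform guess

for _ in range(100):           # iterate the linear map until it settles
    rank = [(1 - d) / n + d * sum(links[i][j] * rank[j] for j in range(n))
            for i in range(n)]

print([round(r, 3) for r in rank])   # -> [0.333, 0.333, 0.333]
```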
Quote by PsiGuy60
OK Pit, I need your help.
Does anyone here currently use Linux, and if yes, can they talk me out of installing Ubuntu (whether in favor of another distribution or just not considering it altogether)? I've been running it via a virtual machine for a while and I think I can work with it nicely. I'm currently studying Information Technology (i.e., programming / system administration) and am being taught how to do all of this on Linux via that same virtual machine.

I should add that I don't give three sh*ts about gaming on the computer I want to run Linux on - I have another computer for that exact purpose.


Use Linux Mint. Basically Ubuntu before it messed itself up with bloat. Mint is currently the most popular distro according to DistroWatch and has been for months now (coincided with Ubuntu getting worse, imagine that!).
Materials most definitely. Plenty of room to explore what you're interested in there as well. Synthesis, instrumental work, theoretical work. It's not that the other fields have no opportunities, just that the majority of jobs are in the materials sector. Government positions (I would have to imagine Scotland is no different from the US here) are often grounded in materials and energy. I'd say medchem is probably the only oddball here -- everything else can at least get your foot in the door to most materials labs.

I'm in computational chemistry and am already looking to gear my thesis towards studying semi-/superconducting materials. So I've been reading literature on the side related to how people study these materials using quantum chemistry. You don't get read much if people don't care...
Quote by Eggmond
Graduated today! Whoop! I start my PhD next week. Any other PhD students got any tips or pointers?


If the school offers free food... you take it.

Nothing brightens your day quite like a free slice of pizza or a couple cookies. Also, you might begin to find these amusing.
Maybe it's about time we created the pamphlet "It's only a theory," and chucked textbooks on relativity, quantum mechanics, evolution, and computer science into people's lives. Mail one to Kent Hovind. He's got some free time.
Even if you don't have a ton of things you feel are not easily replaced on your computer, I'd suggest getting an account on one of the numerous cloud-based storage sites. You can get a small amount of storage at absolutely no monetary cost to you, just the time to sign up! If things haven't changed in the past year or so, Amazon Cloud may still offer the best free-GB deal (5 GB at no cost). I use Amazon Cloud to back up my source code and LaTeX document sources. Pretty straightforward, and no browser limitations (e.g., on Google Docs/Drive you can't upload entire folders if you don't use Chrome). In addition, I have a 2TB external to back up everything... but the most important things go on Amazon too!

Dropbox gives 25 GB for free (for 2 years) if you own an HTC phone and click through a few tutorials.
Quote by Jmoarguitar
So Microsoft have figured out that my windows 7 isn't legit (It is in a way, but i took it from my friends HP Laptop), and they've basically bullied me into submission to purchase windows. My only question is, am i gonna lose everything i have downloaded? Including my game saves (actually doing well in Dayz, don't wanna ruin it ).


Just backup your hard drive (your docs, saves from games, other important files), install Windows, and you will be fine. If you installed Windows before backing up... then you would have an issue.
Quote by Angus_Junior35
I don't need AV software? That's interesting, why is that?


It's a combination of built-in security and the fact that Linux is not widely used. Open-source code also means anyone can inspect it to spot problem areas. In addition, downloaded files are not given executable permission by default, so nothing you download will run without your explicit permission (file extensions do not dictate executable status like they do in Windows). You can use an AV if you want, but it's not necessary in the least.
Quote by grm893
Hi everyone,
I'm wondering if i can get any help with this vector problem.



I need to know how to find the Y-component of the 100N force. I have the answer, which is 100cos(45°)sin(60°), but I have no idea how to get it. Any help would be much appreciated. Thanks


The way to break this into two 2D problems is to project the 100N vector into the xy-plane using the 45 degree angle. Similarly, you could choose to project the vector into the yz-plane with the 60 degree angle. The projection is then the hypotenuse of a right triangle you can use for the two-dimensional problem. Think of the projection as the 'shadow' the 3D vector casts on a 2D surface.

Ignoring dot products, projection of a vector onto an axis (or in this case, a plane) is defined as,

xy-component = 100N * cos(45)
yz-component = 100N * cos(30), note cos(30) is equivalent to sin(60).

As I said, the magnitude of the vector computed with the projection immediately above is the hypotenuse of a right triangle in either the xy- or yz-plane. Apply soh/cah/toa one more time to compute the y-component,

y-component (from xy-plane) = 100N * cos(45) * sin(60)
y-component (from yz-plane) = 100N * cos(30) * sin(45), where the 100N * cos(30) factor is the projected magnitude (the hypotenuse).

Please note that there are a few combinations of angles you can take to achieve the same results. Vector projection is what the others were referring to when they were saying to remove the x- or z-components and reduce the problem to a basic 2D trig problem.
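If you want to verify that the different angle combinations agree, here's a two-line numerical check:

```python
# Both angle routes to the y-component of the 100 N force.
import math

F = 100.0
via_xy = F * math.cos(math.radians(45)) * math.sin(math.radians(60))
via_yz = F * math.cos(math.radians(30)) * math.sin(math.radians(45))
print(round(via_xy, 3), round(via_yz, 3))   # -> 61.237 61.237
```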
If you have Adobe Acrobat (not Reader), convert your image to a PDF, then select to remove the watermark in Acrobat. I'd imagine you could get access through a local university... or other less... legal means. If you've got access to say Ubuntu, xpdf should work much the same (and it would be free).

Just Google how to convert JPEG images to PDF format. Again, if you happen to have some Linux distro, this can be done with imagemagick via the command, convert foo.jpg foo.pdf.

I've also heard of people using Google Docs to reverse engineer the compiled PDF file to its source, then you just delete the watermark source, and re-compile to a PDF. Can you do that with a JPEG source? Hmmm...
You might find these interesting... (*offered as explanations for entanglement*)

Non-locality in QM to explain entanglement...
Hidden variables to explain entanglement...

Yes, they're Wiki links. However, arXiv has many legitimate articles on quantum physics if you want to dig deeper.
Quote by liampje
I need to understand what inversion symmetry is, to start recognizing whether molecules are polar or not.
But as far as I've seen from wikipedia and youtube vids, I don't get a ****.
Can someone explain to me how to recognize the symmetry, in some way I can understand?


For most molecules, knowledge of electronegativities and a little geometry is all you need.

However, you are curious to learn a little from the field of group theory, so... here's the group theory story:

The way most people are taught to search for an inversion center is to find a point (usually a central atom, but may be a region between atoms) through which you can drag each atom, and move it to the same position on the opposite side. If the resulting molecule is completely indistinguishable from the one you start with, then you have an inversion center.

Let's use an example of diatomic molecules. Inversion in a diatomic molecule would see you drag the atoms through the midpoint of the bond, to the other side. The molecule will only have, say, a carbon atom at position A and B if both atoms were carbon to begin with, otherwise... HF for example would see the H migrate from position A to B, and the F from position B to A. This molecule would be considered to have no inversion center because the final molecule is the mirror image of the original and not the original molecule. Not sure if you need to know this much, but C2 would belong to the point group D_infinity h, while HF would belong to C_infinity v -- if you looked at the character tables for these point groups, you would notice that C_infinity v lacks the inversion operation, i, while D_infinity h does have this.

--- EDIT: Sorry, I had started writing about one thing, then changed my train of thought part way through, just fixing it up a bit.

The same basic principle applies to any level of complexity. However, you will begin to recognize it better as you do more and more examples.

So, let's take water as an example (consider only the nuclei). If an inversion center is present, you would likely find it along a line passing between the H's and through the O. Now pick a point along this line, and pass each of the atoms through it. Do you have an O in the exact same spot you did in the original image... what about the H's? Like HF, water has no inversion center.

What about benzene (planar)? The inversion center sits in the middle of the ring. Pass each carbon through this point, to the other side. The carbon located at (using clock positions) 12 o'clock, moves to the 6 o'clock position and so forth with each carbon. Do the same with the hydrogens. Interesting! We have a carbon in all 6 positions we had a carbon at in the original molecule. Same applies to the hydrogens. Benzene has an inversion center.

Just keep in mind, if the molecule is not planar (let's say planar would be the xy-plane, like a 2D graph), then passing through the inversion center would flip you from the +z to -z (and vice versa). It's a common mistake to drag the atom through a mirror plane (which includes the inversion center) but not through the actual inversion center. Using the example of a graph (inversion center is origin let's say), the correct inversion of the 3D point (1,1,1) would be to drag it through (0,0,0), and continue to the point (-1,-1,-1). The mistake I mentioned would give the result (-1,-1,+1). The mirror plane would reflect the x and y coordinates, but not the z.

Want to practice with a non-planar example? Remove the double bonds from benzene, and you have cyclohexane. You now have two hydrogens per carbon and you have multiple conformations. You should find that this molecule also has an inversion center (center of the ring) when drawn in the typical 'chair' conformation.
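If it helps, here's a rough computational version of the 'drag each atom through the center' test. The coordinates are illustrative placements, not real geometries:

```python
# Invert each nucleus through a candidate center and check whether the
# same set of (element, position) pairs comes back.
def has_inversion_center(atoms, center):
    cx, cy, cz = center
    # inversion maps (x, y, z) to (2*cx - x, 2*cy - y, 2*cz - z)
    inverted = {(el, round(2 * cx - x, 6), round(2 * cy - y, 6), round(2 * cz - z, 6))
                for el, x, y, z in atoms}
    original = {(el, round(x, 6), round(y, 6), round(z, 6))
                for el, x, y, z in atoms}
    return inverted == original

# Homonuclear diatomic (C2): centrosymmetric
c2 = [('C', 0.0, 0.0, -0.6), ('C', 0.0, 0.0, 0.6)]
# Heteronuclear diatomic (HF): not centrosymmetric
hf = [('H', 0.0, 0.0, -0.5), ('F', 0.0, 0.0, 0.5)]

print(has_inversion_center(c2, (0, 0, 0)))   # -> True
print(has_inversion_center(hf, (0, 0, 0)))   # -> False
```

You could add the benzene or chair-cyclohexane coordinates and test them against the ring center the same way.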
You could just call it GNUL... or gnul. Don't even have to reach for the shift key then.
Quote by jazz_rock_feel
Logging in as root is almost always a terrible idea. Sudo is there for a very very good reason. Step number one in any Linux command line tutorial is always "You're a shitty person if you log in as root, don't do it nublet."


That it is.

Root is wonderful if you know what you are doing. On my personal laptop where I keep only temporary files and data, I feel more open to it. However, on clusters where many people have access, and where many people store days, weeks, months, or even years worth of work, being too reckless with root (especially logging in as root, as my idiot lab partner is accustomed to doing) leads to quite a bit of trouble.

On the workstation, I avoid it as much as possible, because it is with root access that massive amounts of data can potentially be (and have been) lost. Even something as simple as shutting down the computer after some operation can go awry (as was told above). Losing a few torrents or a few text files is frustrating, but not the end of the world (and really, what Linux user hasn't done something similar).

Losing your Ph.D. adviser's current work will put you in a body bag at the bottom of the local river. I'd like to avoid that, if at all possible.
Quote by justinb904
Oh, I see what you mean now. At least it's stuff you can re-download.


Don't fear root; embrace it as your friend, but always respect its power.
I find it a bit funny when people are skittish to use root or super-user privileges. I generally let them though because those people are probably the ones that don't know what they're doing and shouldn't be using it.

I've done a sudo rm -rf / a few times to wreck a system just for fun. After a proper backup, of course.

edit: just realized my user title is relevant to the conversation.


While I do tend to shy away from root use when an alternative solution is available, I'm most wary of people coming onto a terminal and logging in as root. More than one person has access to each of our systems, so we try to discourage root use as much as possible. The story above is a good indication why. I'm not sure I fear root so much as I fear losing hundreds of gigs of data again.


Quote by jazz_rock_feel
Root is not evil, root is a large part of what makes Linux brilliant. If you're mentally incapable of using it, then it can be deadly. I often wonder how people do ridiculous things, like do they just read a command, have no idea what it does and then decide to run it? "Hmmm, sudo rm -rf /... Seems legit."

I get inadvertently writing over a file or something like that, but I can't understand running something that just dummies the system without having any idea what it would do.


My lab buddy claimed exhaustion. Though I've no idea how he decided removing gcc + all dependencies with the package manager was a good idea. He wanted to install an older gcc... I then showed him the webpage explaining how to install multiple gcc versions. C'est la vie.
Quote by LRCGUITAR
Having deleted 50GB of downloads, I think it's safe to say I won't automatically trust guides on the internet when they tell me to run things as root...


root is the key to bad things in Linux. My lab buddy knows this now, I think... he cost our group hundreds of gigabytes of data and set a couple people back a week or so trying to recover what we could. Oh... and he didn't offer to help. The guilty command: the legendary rm -r * as root, in the / directory (he must have been trolling us or something; NO ONE does this with a legitimate excuse).

This is the same guy who managed to wander into the package manager, click to remove gcc, and then... click to remove ALL of its dependencies. The OS wouldn't boot. Luckily it didn't cost us any data, but pulling everything off and putting it back once I had reinstalled fried my 2TB external. He refuses to pay me back.

root is the true root of all computer evil, I avoid it if I can. Sometimes just waiting for something to finish will cost you less time in the end, should the script you gave root access to cause any issues.
*Strolls in, trips on rock, something falls from pocket to the ground*

Probably not USB/Linux related...

... should you change your mind.
How is the pursuit of knowledge (say as a scientist) not selfless? Am I hoarding these things I learn, or writing papers so that others may also learn what I have? Are scientists recognized for 'how smart they are' or for their contributions to the community? I'd say it is a purely humbling experience to be a scientist, to add to a body of work many others have... but, there are some people who become scientists and take on a bit of an inflated ego.
Quote by betelgeusex
It can't be both?


Exactly my question.

Quote by qaz923
To the guy above me.

I admit that I am woefully ignorant of scientific matters. And while that does not justify my belief, it simply leaves a bigger role for the divine. It's not that I think God programs computers, but more along the lines of: the possibility for humans to create computers is there because of the world that we occupy. God to me is like the keeper of the natural laws of the universe.


You may develop a greater appreciation for the world if you choose to learn more about it. Curiosity is a gift, divine or not.
Quote by qaz923
It's not singularly my own thought, don't make this out to be me as the lone hold out. I just feel as though scientific evidence does not explain the universe that makes sense to me. What good (to me) is an explanation that does not make sense to me? It isn't very useful.


"It's difficult" is NEVER a valid reason to dismiss something. It does not make sense to you because you have not attempted to study it. Are you familiar with computer programming? Perhaps semiconductors? Let's assume not, for sake of argument. Would you then deny the role programming languages and these materials play in our technology?

Do me a favor. Turn on the [analog] TV. Find a static-y channel or station. That static is (in small part) the cosmic microwave background radiation, a remnant of the Big Bang. Now do me another favor. Research the CMBR, learn something about it. Feel free to post what you learn.

Quote by qaz923
What caused the big bang? The big bang occurred because of a collision of particles (am I right on this?) therefore there was a certain rate of speed that these particles must have been moving. rate=distance/time. If we acknowledge that there is an age of the universe then we acknowledge that there is a time now. We can subtract this time until we get to zero, at which point there is no rate of the universe. There must then be an actor that acts upon the universe to set it into motion.


This sounds like a wonderful opportunity for you to do some investigating for yourself.

Quote by WhiskeyFace
Glad to see how busy this thread is lately.


As am I.
Quote by jazz_rock_feel
Okay, so hopefully last calculus question ever...
Find the area of the region that lies under the graph of f(x)= x between x = 0 and x = 2 by taking the limit of the sum of approximating rectangles whose heights are the values of the
function at the right hand end point of each subinterval.

So, I'm not too too bothered with an in depth understanding of how to do this sort of question as I only have to answer one of them for my entire course, but any resources for a step by step on how to solve this? I know basic integrals and stuff, but this sort of question uses a lot of notation that I've never seen, like i's and Sigmas and crazy shit.


You approximate the area under the curve by adding up the areas of N total rectangles drawn so that either (a) the left side of each rectangle hits the curve, (b) the right side does, or (c) the midpoint of the rectangle contacts the curve. Sigma represents a summation over N rectangles with a 'counter' i, similar to a for-loop in programming:

for i in {1..100}, i would take on values 1, 2, 3, 4, 5, ..., 98, 99, N=100.

The limit would be taken as N approaches infinity and you sum the areas of an infinite number of rectangles, each with a height exactly equal to the y-value of the curve at some x-value.

It's probably the most basic of the numerical quadrature methods, but for large N your answer should converge to the exact solution. However, the error can swing in both directions depending on the method employed (left side, right side, or midpoint), so you may overestimate or underestimate the value.

http://en.wikipedia.org/wiki/Rectangle_method
http://en.wikipedia.org/wiki/Riemann_sum

It's easy and convenient to use equally wide rectangles, but they don't have to be the same width.

You would need to derive the expression for the limit as N approaches infinity (bottom of Riemann sum page will help). Or do it numerically with a finite number of rectangles.
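Here's a small numerical sketch of the right-endpoint sum for your exact problem, f(x) = x on [0, 2], so you can watch it converge toward the exact area of 2:

```python
# Right-endpoint Riemann sum for f on [a, b] with n rectangles.
def right_riemann(f, a, b, n):
    dx = (b - a) / n
    # heights taken at the right end of each subinterval: a + dx, a + 2*dx, ...
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

for n in (10, 100, 1000):
    print(n, round(right_riemann(lambda x: x, 0.0, 2.0, n), 6))
# -> 2.2, 2.02, 2.002: overestimates shrinking toward the exact area, 2,
#    as expected for a right-endpoint rule on an increasing function.
```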
Quote by jazz_rock_feel
C just seems like that one language that everyone hates, but just accept that it's ubiquitous and there's no way to get rid of it


C be fast, so C will stick, C in-line make Python quick!