|
Thread: Morals VS Technology...
|
Salamandre
Admirable
Omnipresent Hero
Wog refugee
|
posted May 24, 2011 08:55 AM |
|
|
Well, give ALL USA people 100% health insurance and then see how rich it remains. Plus superb social aid, as they do here (not that I agree with it or anything, but that's how it is).
France: 168 billion euros annually in health care costs for 60 million citizens. Multiply by five, then convert to dollars, to see how much it would cost for the USA.
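Here is the quick calculation as a rough sketch (the ~1.4 dollars per euro rate is my assumption, roughly the 2011 exchange rate):

# Scale France's annual health care cost to the US population.
# Assumptions: USA is ~5x France's 60 million people; EUR/USD ~1.4 (2011-ish).
france_cost_eur = 168e9                     # 168 billion euros per year
usa_cost_usd = france_cost_eur * 5 * 1.4    # scale population, convert currency
print(f"~${usa_cost_usd / 1e12:.2f} trillion per year")   # ~$1.18 trillion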
____________
Era II mods and utilities
|
|
JoonasTo
Responsible
Undefeatable Hero
What if Elvin was female?
|
posted May 24, 2011 09:42 AM |
|
|
That's net income, I take it?
Because there's no way French people are paid that low if we Finns get twice that.
____________
DON'T BE A NOOB, JOIN A.D.V.E.N.T.U.R.E.
|
|
Salamandre
Admirable
Omnipresent Hero
Wog refugee
|
posted May 24, 2011 10:01 AM |
|
|
|
JoonasTo
Responsible
Undefeatable Hero
What if Elvin was female?
|
posted May 24, 2011 10:07 AM |
|
|
So is that net income?
Or gross income?
If that's the net income, it's not so bad in a country with high taxes. I got 1800 a month for changing tires. 23% taxes off that. So around 1400 a month.
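The quick math behind that, if anyone wants to check it (my 23% is an approximate overall rate):

gross = 1800                    # euros per month for changing tires
tax_rate = 0.23                 # approximate overall tax rate
net = gross * (1 - tax_rate)
print(round(net))               # 1386 - roughly 1400 a month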
____________
DON'T BE A NOOB, JOIN A.D.V.E.N.T.U.R.E.
|
|
Salamandre
Admirable
Omnipresent Hero
Wog refugee
|
posted May 24, 2011 10:09 AM |
|
|
I don't know, I work in the private sector, so things are different. Maybe Fauch knows how it goes. All I know is that my teacher colleagues in the public sector are almost homeless, starting at 1200 euros gross.
____________
Era II mods and utilities
|
|
JoonasTo
Responsible
Undefeatable Hero
What if Elvin was female?
|
posted May 24, 2011 10:10 AM |
|
|
Where the hell did you pull your 1300 from, then?
That link said 1700 net per month. Time to revise your contract?
____________
DON'T BE A NOOB, JOIN A.D.V.E.N.T.U.R.E.
|
|
Duke_Falcon
Disgraceful
Supreme Hero
|
posted May 24, 2011 01:41 PM |
|
|
Technology.
Why?
What will ethics do for you if you get cancer or something? Nothing.
The current state of medicine developed mainly in '39-45. And frankly, the Nazi physicians and surgeons made unbelievable advances in medical treatment.
Experimenting on humans is necessary if you want to achieve things that could help humans. Experimenting on rats or mice? There are certain biological differences, so after the rodents you need human subjects too... So here we are again!
Ethics won't help you if you die, remember that, while some experiments that have been made on human subjects may (80%) actually help...
____________
|
|
Fauch
Responsible
Undefeatable Hero
|
posted May 24, 2011 03:06 PM |
|
|
I don't know. I guess the average is probably between 1000 and 2000€.
Most people I know have jobs with very low qualifications; they don't make more than 1500€ per month, for sure. They can get close, though, but that requires more than just 35 hours of work.
Actually, 1200 seems to be about the minimum wage for 35 hours, and with all the unemployment, which makes workers easy to replace, many companies can afford to pay just the minimum.
|
|
Elodin
Promising
Legendary Hero
Free Thinker
|
posted May 24, 2011 03:07 PM |
|
|
Quote: I don't see how health care is the mark of Marxism. Are you suggesting that Europe is still Marxist?
Secondly, I don't understand your logic: you are for a law allowing the sale of organs, but on the other hand you call this action a "stupid thing".
Health care carried out through stealing money from one person to provide health care for another is Marxist.
My logic is that the government should not have the power to prevent you from doing stupid stuff. Drinking too much is stupid, but the government should not be allowed to limit how much you can drink. I consider face lifts and tattoos to be stupid, but the government should not be allowed to prevent you from getting those. Etc.
|
|
Zenofex
Responsible
Legendary Hero
Kreegan-atheist
|
posted May 24, 2011 05:02 PM |
|
|
A breakthrough in medical science - people fall ill because they are stupid.
|
|
mvassilev
Responsible
Undefeatable Hero
|
posted May 24, 2011 05:03 PM |
|
|
Quote: Well, give ALL USA people 100% health insurance and then see how rich it remains.
Sounds like an excellent argument against government health care.
____________
Eccentric Opinion
|
|
Corribus
Hero of Order
The Abyss Staring Back at You
|
posted May 24, 2011 09:35 PM |
|
|
Guys, you've completely killed Smithey's thread. Let's please bring it back on track. The original topic was morals versus technology. You wanna talk socialism and healthcare, please do it somewhere else.
Yeah, that's me practicing up my moderating skillz.
Now, to remind everyone, the original question was:
Quote: Ethics and morality usually clash with technological advancements, should we do whatever is necessary for the greater good or should we stay restricted within the borders of "morality" ?
This is ground we've trod before, but not, I don't think, in a general sense. Which on the one hand is a good thing, but on the other hand makes it hard to have a focused discussion. In any case, since the question is general, I'll speak to it generally.
The OP touches on the crux of the problem and then mvass fleshes it out when he writes:
Quote: In that case, we shouldn't do anything "no matter what the cost is". We have to weigh the costs and the benefits.
What mvass endorses is an analytical, cognitive approach to decision-making. Which of course has little to do with morals and, to some degree, ethics. Morals are NOT grounded in data-based, logical risk analyses. Morals are based to large degree on emotions, feelings, cultural or religious factors, and other affective heuristics which bias the way a person perceives risks and benefits. Emotional decisions based on morals are often reflexive, almost instantaneous, based more on these perceptions than on actual data or facts. Analytical decisions take more time, as costs and benefits must be enumerated and weighed against each other. When given time, people often make different choices than they would if asked to render a spur-of-the-moment decision.
Morals offer people a sort of psychological blueprint so that they can make quick decisions without having to spend a lot of time making cost-benefit analyses for each set of circumstances they encounter. It makes a sort of evolutionary sense if you think about it - there's a selective advantage to being able to make quick decisions. Morals are sort of a way for people to say to themselves, in advance, "OK, if I encountered this kind of situation, I should do this," and "OK, if I encountered that kind of situation, I should do that." These are usually tailored to maximize the likelihood that any decision being made will benefit the self or closely-associated individuals, even society at large (because usually what is good for society is good for the self).
Imagine for a moment if people didn't have morals. You are walking down the street and see someone breaking into a house. If you were limited to analytical, cost-benefit analyses to guide your actions, you would be required to gather data to make some sort of decision. You'd need to find out who the person is, whether they own the house, why they are trying to get in, etc., etc. This sort of data acquisition process takes an enormous amount of time, not to mention you'd have to make some decision about when you have enough data to proceed. Supposing you had such time, you might acquire data and conclude that the person does not own the house and is trying to get inside in order to take something that isn't his. Then you project the risks and benefits of his doing this, and of your intercession. In the end, you might reasonably conclude that while such an action does not directly harm you, if such an action were allowed to proceed, the community would be less safe because you might be the next victim of this thief. They might take your food or your medicine - which would compromise your survival. Worse, you might not be able to forward-think your way to this downstream conclusion, and erroneously let the person go on breaking into the house, judging that the action doesn't harm you in any way. It's hard to put yourself in these shoes, of course, because we all already know that stealing is wrong. Just try to think outside the box for a moment, though - imagine all the computations you'd have to make to arrive at this sort of abstract reasoning every time you encountered a situation. By the time you reached a decision - even if you reached the right one - the thief would be long gone.
In the end, what matters in making a decision is not the consequences themselves, or how they arise, but rather whether the consequences are positive or negative. In other words, to make a good decision you don't necessarily need to know why a decision is good. You just need to know that it IS good. So you see, then, that if you had some sort of cheat-sheet to help you out, you could make the right decision (most of the time) quickly. We see someone breaking into a house and we automatically recognize that it looks suspicious and immoral. Our conditioning and experience tell us this instantly - and we can call the police or intercede before harm is done to society. We protect ourselves because of this capacity to act without having to weigh all the risks and benefits.
Of course, decisions based on morals can be wrong. Maybe the person owns the house and just got locked out. Well, that's sometimes the problem with morals. We DON'T get all the facts before making a decision, so sometimes the decisions are wrong. Morals aren't designed to be flawless. They're designed to maximize the chances of making decisions that benefit us.
So.
Back to the original post - why do morals often clash with technology? I think I've set the stage to answer that easily.
First, let's see what we mean by "clash". Morals, as I've said, are designed to make quick decisions based on broad, generic circumstances. They've been developed over long periods of time, even on evolutionary timescales. They're based on experience, meaning they're based on the past. They're designed to solve simple quandaries encountered in natural settings.
New technologies offer us the ability to do new things, to solve problems in new ways. If you think of morals as a sort of computer algorithm that takes an input, runs it through a simple program, and spits out a simple generic answer, perhaps you can see where the problem is. You're feeding a computer algorithm an input that it wasn't designed to handle. The computer algorithm doesn't really know what to do with this foreign input, so it either stalls, or spits back an answer that doesn't make any sense. Or different algorithms spit out different answers, and then people fight over whose algorithm is better. (Everyone has a slightly different moral algorithm, after all, and though most will spit out the same answer when it comes to something simple like a guy breaking into a house, you'll get all kinds of different answers with something more complex like cloning a human being.)
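To make the metaphor concrete, here is a toy sketch (purely illustrative - the rule table and the inputs are invented, not anyone's actual moral code):

# A "moral algorithm" as a fixed lookup table of pre-computed judgments.
# Familiar inputs get instant answers; foreign inputs make it stall.
MORAL_RULES = {
    "a guy breaking into a house": "wrong",
    "murder": "wrong",
    "returning a lost wallet": "right",
}

def moral_judgment(situation):
    # Instant verdict for the simple, natural situations the table was built for
    if situation in MORAL_RULES:
        return MORAL_RULES[situation]
    # Foreign input: no stored answer, and different people's tables
    # will disagree about what to return
    return "no stored answer - judgments diverge"

print(moral_judgment("a guy breaking into a house"))  # wrong
print(moral_judgment("cloning a human being"))        # no stored answer - judgments diverge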
Actually, let's take the issue of human cloning as a case study. To determine whether human cloning is something that should be pursued, risk managers and ethicists need to do an extensive risk analysis to determine what the benefits are and what the potential consequences to society might be. That in itself is a difficult process because it's hard to predict what the downstream benefits and risks of a new technology like human cloning are going to be, but scientists and philosophers and ethicists are usually good analytical thinkers who can separate their emotional, moral response to a new technology from their fact-gathering and analysis. (Unfortunately, that's not usually the case with politicians, who ultimately make the decisions - but that's another story.) Laypeople, on the other hand, are not experts, and even if they could take the time to gather the incredibly complex facts behind genetics and sociology, they wouldn't have the expertise to process them and make a well-informed decision. So they rely on - again - emotions and those moral algorithms to make choices and decisions. The problem is that they're trying to cram genetic engineering into a system that was designed to process simple dilemmas like murder, rape, and theft, which have fairly obvious effects on the safety of society - and it's no surprise, really, that people whose value systems would pretty much unanimously agree that murder is not moral have a diverse range of emotional reactions to new technologies like genetic engineering, nanotechnology, nuclear energy, and so on. Especially when we're conditioned to distrust things we don't understand or don't think are "natural".
Anyway, that's all I'll say about it for now. I'll point out that I've dedicated a significant amount of time to learning why people make decisions about nanotechnology - I've read dozens of papers in the psychological and sociological literature on this topic, and have a paper of my own under review at the moment - and so I feel like something of an expert on the subject now. Questions and comments are therefore welcome.
____________
I'm sick of following my dreams. I'm just going to ask them where they're goin', and hook up with them later. -Mitch Hedberg
|
|
mvassilev
Responsible
Undefeatable Hero
|
posted May 24, 2011 10:37 PM |
|
|
Despite that being an excellent post, you misrepresent morality and morals. Morality is a framework for decision-making - it tells us which ends are good and which are bad. It's not an "emergency guide". When you analyze something and come up with the result, you take it and apply the morality test, with questions like, "Is it worth it?", "Does what we get outweigh what we lose?", "Are anyone's rights being violated?", and the like. When you say "morals" it sounds more like you're talking about moral intuitions - which are really not the same thing, and the questions of why/how science conflicts with them and how/why it conflicts with morality are completely different.
____________
Eccentric Opinion
|
|
Corribus
Hero of Order
The Abyss Staring Back at You
|
posted May 24, 2011 11:00 PM |
|
|
Where in my post did I say morals were anything other than a framework for decision making? In addition, any deliberative process of risk analysis IGNORES morals. You might incorporate morals into a decision making process AFTER deliberative risk analysis, but most people do make decisions based solely on emotional response and quick moral judgement. I see no refutation of anything I wrote at all - just a declaration that I'm wrong in some unspecified way and then a restatement of what I wrote.
By the way, you should be careful to use precise language. There is no absolute "good" or "bad". Nobody is a fortune teller, for one thing - all we can deal with are predictions - and for another, it's impossible to deduce causality with certainty.
If you'd like me to point you toward some literature on affective decision making, I'd be happy to do so. You might want to start with the work of Paul Slovic.
|
|
mvassilev
Responsible
Undefeatable Hero
|
posted May 24, 2011 11:19 PM |
|
|
Now you're restating what I wrote. I didn't say risk analysis doesn't ignore morals, I said - these are my exact words - "When you analyze something and come up with the result, you take it and apply the morality test..."
Some people make decisions based on moral intuition. Others make them based on thought-out moral frameworks. What you're doing is misusing the term "morality" to refer to an unconscious/semi-conscious rule of thumb. Your whole paragraph that starts with "Imagine people didn't have morals..." would be more accurate if it said "Imagine people couldn't make quick decisions on questions of morality..." Not being able to decide quickly is not the same thing as not having morals. You said that what matters when making a decision is whether the consequences are positive or negative. Morality is how you decide which one they are.
Quote: There is no absolute "good" or "bad".
I didn't say there was - even though I think there is.
____________
Eccentric Opinion
|
|
Fauch
Responsible
Undefeatable Hero
|
posted May 24, 2011 11:31 PM |
|
Edited by Fauch at 23:38, 24 May 2011.
|
Well, one problem, as you said, is that morality is based on the past, so while the world is always changing and situations are always new, you come up with an old solution.
I think the moral answer and the emotional answer are completely different things, and morals may clash with emotions.
Emotions come spontaneously, so if morality were emotional, there would be no point in teaching it.
|
|
Corribus
Hero of Order
The Abyss Staring Back at You
|
posted May 24, 2011 11:42 PM |
|
|
@Mvass
Sounds like six of one and half a dozen of the other to me. You need to try again, because I don't understand the hairs you're trying to split.
|
|
Fauch
Responsible
Undefeatable Hero
|
posted May 24, 2011 11:47 PM |
|
|
I think he is saying that morals are conditioning, something you follow without even thinking about it because you are deeply convinced it is the right thing to do.
And morality is when you open a law book or a Bible to check whether something is good or not?
|
|
Corribus
Hero of Order
The Abyss Staring Back at You
|
posted May 24, 2011 11:57 PM |
|
|
@Fauch
Quote: I think he is saying that morals are conditioning, something you follow without even thinking about it because you are deeply convinced it is the right thing to do.
In line with what I wrote.
Quote: And morality is when you open a law book or a Bible to check whether something is good or not?
I don't even know what that means. I don't think much distinction needs to be made between the two terms. Morals are what I said they are, and morality is acting in accordance with one's morals.
In any case I see no other compelling argument against what I wrote, or even a real point of contention for that matter. I can't really respond to an argument I don't understand.
|
|
mvassilev
Responsible
Undefeatable Hero
|
posted May 24, 2011 11:58 PM |
|
|
To explain it simply:
You do risk analysis, then decide whether the consequences are positive or negative. The standard by which you decide that is morality (or a moral framework).
If you don't have time to do a thorough risk analysis, you use your moral framework and your best understanding of what's going on to decide what to do.
Moral framework + best understanding =/= moral framework.
____________
Eccentric Opinion
|
|
|
|