We should give more points to conclusions that have higher perceived ethicality of their methods and results

Integrating ethical considerations into the evaluation of conclusions can significantly enhance the validity and acceptability of those conclusions. By allowing individuals to score the ethicality of various methods and results, we can create a framework where ethical considerations are systematically factored into the final assessment of each conclusion. This approach encourages consistency in reasoning and helps identify any logical fallacies or biases in judgment.

Using computational tools in this process enables a more objective and quantifiable assessment of ethicality. By assigning scores to philosophical questions or ethical considerations, a computer algorithm can process these inputs to determine the overall validity of conclusions based on both logical and ethical grounds. This methodological rigor ensures that ethicality is not merely a subjective or secondary consideration but a central criterion in the evaluation process.

This approach aligns with the broader objective of making decision-making more transparent, consistent, and ethically grounded. It reinforces that ethical considerations are not just abstract or philosophical concerns but integral to the practical assessment of ideas and policies.

Labels: Ethical Evaluation in Decision-Making, Consistency in Ethical Reasoning, Computational Ethics Assessment, Integration of Ethicality in Conclusions, Objective Ethical Scoring, Logical and Ethical Conclusion Assessment, Ethical Consensus in Argumentation, Ethical Considerations in Computational Analysis.

This approach can be represented more formally with the following equation and definitions, for readers comfortable with math.

User Scores


$$PES = C_1 \times \frac{\sum EM}{EMA \times 10} + \frac{\sum EE}{EEA \times 10}$$

Means Definitions

  • Perceived Ethics Score (PES): This score could be added directly to the conclusion score or used as a multiplier. The PES reflects the ethical assessment of a proposal's methods and results.
  • Ethical Means (EM): This is the score, ranging from 1 to 10, assigned by an individual to assess how ethical the means or methods of a proposal are.
  • Ethical Means Asked (EMA): This represents the count of individuals who have rated the ethicality of a proposal's means or methods.
  • Normalization Factor (e.g., 10):  Used to normalize scores to a scale of 0 to 1, where, for example, an average score of 8 translates to 0.8 or 80% validity. This aids in making the evaluation process more intuitive.
  • Constant 1 (C1): A weighting constant that adjusts the equation based on the performance of arguments holding that the means matter more than the ends.
Ends Definitions
  • Ethical Ends (EE): The individual ethicality score assigned to the ends or results of a proposal, on a scale of 1 to 10.
  • Ethical Ends Asked (EEA): The number of respondents who rated the ethicality of a proposal's ends or results.
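
As a quick illustration, here is a minimal Python sketch of the reconstructed PES formula above. The function name, the input format, and the choice to simply sum the two terms are illustrative assumptions, not a specification.

```python
def perceived_ethics_score(means_scores, ends_scores, c1=1.0):
    """Compute a Perceived Ethics Score (PES) from rater inputs.

    means_scores: 1-10 ratings of the proposal's methods (EM),
                  one per respondent (len == EMA).
    ends_scores:  1-10 ratings of the proposal's results (EE),
                  one per respondent (len == EEA).
    c1:           weighting constant (C1) for the means term.
    """
    ema, eea = len(means_scores), len(ends_scores)
    # Divide each average by the 10-point scale to normalize to 0-1.
    means_term = c1 * (sum(means_scores) / (ema * 10)) if ema else 0.0
    ends_term = (sum(ends_scores) / (eea * 10)) if eea else 0.0
    return means_term + ends_term

# Example: four raters score the methods, three score the results.
print(perceived_ethics_score([8, 7, 9, 8], [6, 7, 8]))  # 0.8 + 0.7 = 1.5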

User Justification

Of course, the primary method of ranking ethics is with the ReasonRank algorithm (a modified version of Google's PageRank Algorithm that counts reasons instead of links but gives reason scores based on their supporting and opposing sub-arguments).

To do this, we simply sum the scores of arguments that agree that a belief or action is ethical and subtract the scores of arguments that it is not. We must also group similar ways of saying the same thing, to prevent double-counting arguments that are merely worded differently. And, like everything else promoted by the Idea Stock Exchange, we must use linkage scores between each argument and the ethic in question: a percentage score, multiplied by the argument's score, that measures the degree to which the argument, if true, would actually strengthen or weaken the ethic. The same argument can therefore have different linkage scores for different beliefs and ethics, and if an argument or piece of evidence is weakened, every conclusion built on it is automatically weakened as well.

Implementing ReasonRank for Ethical Evaluations

The Idea Stock Exchange advocates for using the ReasonRank algorithm to evaluate the ethicality of beliefs and actions. This approach, inspired by Google's PageRank Algorithm, prioritizes the quality and relevance of arguments in determining ethical scores. The process involves:
  1. Summation of Argument Scores:

    • Calculate the ethicality score by summing the scores of arguments that support the ethical nature of a belief or action and subtracting the scores of arguments against its ethicality.
  2. Grouping Similar Arguments:

    • To avoid redundancy and ensure accuracy, group arguments that express similar ideas, preventing the double counting of slightly varied arguments.
  3. Using Linkage Scores:

    • Apply linkage scores between arguments and the ethical aspect in question. These scores quantify how strongly an argument, if true, would support or challenge the ethical nature of the belief or action.
  4. Differentiating Linkage Scores:

    • Recognize that the same argument can have varying linkage scores when related to different beliefs or ethical considerations. This distinction allows for a nuanced understanding of how arguments contribute to different aspects of an issue.
  5. Dynamic Adjustment of Scores:

    • Ensure that any changes in the strength or validity of an argument or piece of evidence lead to automatic adjustments in all conclusions or ethical evaluations that rely on them.

This structured approach enables a more systematic and transparent assessment of ethics, aligning closely with the Idea Stock Exchange's goal of fostering well-founded and logical discourse. By carefully evaluating arguments and their relevance to ethical considerations, this method ensures that ethical evaluations are grounded in rational analysis and robust evidence.
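
To make steps 1 through 5 concrete, here is a minimal Python sketch of this kind of scoring. The data structure, the recursion, and the way linkage scores multiply sub-argument scores are assumptions for illustration, not the Idea Stock Exchange's actual implementation; it also assumes similar arguments (step 2) have already been grouped upstream.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    """An argument with an intrinsic score, a linkage score to its
    parent, and its own supporting/opposing sub-arguments."""
    base_score: float   # intrinsic strength, e.g. from user scores
    linkage: float      # -1..+1: how strongly it bears on its parent
    pro: list = field(default_factory=list)   # supporting sub-arguments
    con: list = field(default_factory=list)   # opposing sub-arguments

def argument_score(arg: Argument) -> float:
    """Score an argument from its base score plus the linked scores of
    its sub-arguments (steps 1, 3, and 4 above). Because scores are
    recomputed from current values, weakening any sub-argument
    automatically propagates upward (step 5)."""
    sub = sum(argument_score(a) * a.linkage for a in arg.pro) \
        - sum(argument_score(a) * a.linkage for a in arg.con)
    return arg.base_score + sub

def ethicality_score(pro_args, con_args) -> float:
    """Sum the linked scores of arguments that the belief or action is
    ethical, minus those that it is not."""
    return sum(argument_score(a) * a.linkage for a in pro_args) \
         - sum(argument_score(a) * a.linkage for a in con_args)

# A belief with one supporting and one opposing argument; the supporting
# argument is itself strengthened by a sub-argument.
support = Argument(base_score=3.0, linkage=0.9,
                   pro=[Argument(base_score=2.0, linkage=0.5)])
oppose = Argument(base_score=2.0, linkage=0.7)
print(ethicality_score([support], [oppose]))  # (3 + 2*0.5)*0.9 - 2*0.7 = 2.2
```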

 

The process of evaluating ethical considerations in proposals, particularly those involving explicit actions, can benefit from a more nuanced approach. Let's refine the existing system to better handle the complexities of ethical arguments related to both methods and results. We will focus on integrating the concept of 'Linkage Score' and the use of 'n' to signify the distance of sub-arguments from the primary conclusion:

  1. Definition of Variables:

    • n: Represents the number of 'steps' or levels removed a sub-argument is from the primary conclusion.
    • AAEM(n,i)/n: Arguments that Agree with the proposal's Ethical Methods. 'i' denotes individual reasons to agree. For instance, AAEM(1,1) to AAEM(1,5) represent five distinct reasons at the first level. The division by 'n' scales the contribution of these reasons according to their distance from the main conclusion.
    • ADEM(n,j)/n: Arguments that Disagree with the proposal's Ethical Methods. 'j' is similar to 'i' but for reasons to disagree. The effect of these reasons is subtracted from the total score, and the division by 'n' again scales their impact.
  2. Normalization and Scoring:

    • The total score is normalized by the sum of reasons to agree and disagree, ensuring the Conclusion Score (CS) reflects a percentage of agreement. The CS can range between -100% and +100% (or -1 and +1); a tentative reconstruction of the full formula appears after this list.
  3. Application Example:

    • Consider a policy proposal like Barack Obama's suggestion to raise taxes for families earning over $250,000. This proposal not only has explicit actions but also implicit results, each subject to ethical scrutiny. Ethical debates might encompass broader questions about national income tax ethics, progressive tax systems, or specifics like cost-of-living adjustments and family size considerations.
  4. Ethical Argument Tagging:

    • To add depth to our analysis, we categorize arguments as specifically addressing either the ethics of methods or results. This tagging helps in systematically organizing and weighing arguments based on their ethical implications.
  5. Complexity Acknowledgement:

    • This refined approach recognizes the inherent complexity in policy proposals, especially those with unstated results. It enables a comprehensive ethical evaluation, accounting for the multi-faceted nature of their methods and results.
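
The equation these definitions belong to is not shown in the original. Based on items 1 and 2, a tentative reconstruction (an assumption on my part, not the author's verbatim formula) is:

$$CS_{EM} = \frac{\sum_{n}\sum_{i}\frac{AAEM(n,i)}{n} \;-\; \sum_{n}\sum_{j}\frac{ADEM(n,j)}{n}}{N_{agree} + N_{disagree}}$$

where $N_{agree}$ and $N_{disagree}$ are the total counts of agreeing and disagreeing arguments, giving a score between -1 and +1.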



Example: The end does not justify the means

(Image: John Stuart Mill, an influential liberal thinker of the 19th century and a teacher of utilitarianism, though his teachings differ somewhat from Jeremy Bentham's philosophy.)

Reasons to agree

  1. When you live in a society with laws, the ends (your goals) do not justify illegal means (or ways of accomplishing those goals).
  2. You are a hypocrite if you rely on the law to protect you, but you think you can break the law to accomplish your vision of the greater good.
  3. We need to examine both the ends and the means of our actions.
  4. From a practical standpoint, if everyone thought the end justified the means, the world would be a much worse place: an extremist view of the ends justifying the means would allow you to kill those who disagreed with you, resulting in a great deal of war and murder.
  5. A lot of people have justified their actions by saying that the end justifies the means.
  6. God will not require us to do evil to defeat evil.
  7. People who say that the ends justify the means cause more problems trying to fix things than if they had stayed out of it. It's better to live and let live.
  8. Just because an abortion may result in good things for the mother, and even society as a whole, doesn't mean that it is all right. You can't say that taking a life is ever justified.
  9. The rightness or wrongness of one's conduct derives from the character of the behaviour rather than from its outcomes.
Reasons to disagree
  1. If killing is wrong, would you have killed Hitler if you knew it would have saved millions of lives? The ends may justify the means if, in the long run, it helps more people than it hurts. I would have killed Hitler.
  2. Sometimes the end justifies the means and sometimes it doesn't. 
  3. The end does justify the means when the good guys are doing the justification. 
  4. You can accomplish good by doing evil. 
  5. The consequences of one's conduct are the ultimate basis for any judgment about the rightness of that conduct.

We can use algebra to represent each term and make this more formally mathematical, with the formula below and an explanation of each term:
Ranking this conclusion by the ratio of reasons to agree vs. disagree (please add your reason to agree or disagree)

  • n: Number of "steps" the current argument is removed from the conclusion.
  • A(n,i)/n: When n=1, we are looking at arguments used directly to support or oppose a conclusion. The second subscript, "i", indicates that we total all the reasons to agree; when n=1, we could have five "i"s, indicating five reasons to agree, labeled A(1,1), A(1,2), A(1,3), A(1,4), and A(1,5). The n in the denominator means that a reason to agree with a reason to agree (n=2) contributes only half a point to the overall conclusion, a reason to agree with a reason to agree with a reason to agree (n=3) only a third of a point, and so on.
  • D(n,j)/n: The Ds are reasons to disagree and work the same way as the As, except that their scores are subtracted from the conclusion score. Therefore, if you have more reasons to disagree, you will have a negative score. "j" simply indicates that each reason is independent of the others.
  • The denominator is the total number of reasons to agree or disagree. This normalizes the equation, so the conclusion score (CS) represents the net percentage of reasons that agree. The conclusion score will range between -100% and +100% (or -1 and +1).
(Image: Jeremy Bentham, best known for his advocacy of utilitarianism.)
  • L: Linkage Score. The above equation would work well if people only submitted arguments they honestly felt supported or opposed conclusions. We could probably find informal ways of making this work, similar to how Wikipedia trusts people and has a team of editors to ensure quality. However, we could also introduce formal ways to discourage bad logic. For instance, someone could submit "the grass is green" as a reason to support the conclusion that we should legalize drugs; the belief that the grass is green has good reasons to support it and might itself have a high score. At first, to avoid this problem, I would just have editors remove bad-faith arguments. A more formalized process would give each argument a linkage score between -1 and +1, multiplied by the argument's score, representing the percentage of that argument's points that should be given to the conclusion's points. See LinkageScore for more.
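
The formula itself appears to have been lost in formatting. Based on the definitions above and the worked example below, it was presumably of the form:

$$CS = \frac{\sum_{n}\sum_{i} L_{n,i}\,\frac{A(n,i)}{n} \;-\; \sum_{n}\sum_{j} L_{n,j}\,\frac{D(n,j)}{n}}{N_A + N_D}$$

where $N_A$ and $N_D$ are the total counts of reasons to agree and disagree.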

Conclusion Score = [(5/1)×L - (4/1)×L] / (5+4) (because linkage scores aren't working yet, let's assume L=1 for each argument) = (5-4)/9 = 11% valid. This might not sound good, but looking at the math you can see that values will range between -100% and +100%.
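
As a sanity check, here is a minimal Python sketch of the arithmetic above; representing each argument as an (n, L) pair carrying one point of base weight is an illustrative simplification.

```python
def conclusion_score(agree, disagree):
    """Net percentage of agreement, per the definitions above.

    agree, disagree: lists of (n, L) pairs, one per argument, where n is
    the argument's distance from the conclusion and L its linkage score.
    Each argument contributes L/n points before normalization.
    """
    net = sum(l / n for n, l in agree) - sum(l / n for n, l in disagree)
    return net / (len(agree) + len(disagree))

# The worked example: 5 direct reasons to agree, 4 to disagree, L = 1.
print(conclusion_score([(1, 1)] * 5, [(1, 1)] * 4))  # 0.111... ≈ 11%
```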

If the ends don't justify the means, then how can this be morally right?



Promoting good linkages between assumptions, arguments, and conclusions

There will be slightly different forms for evaluating the different types of argument, because specific questions can be tailored to promote better quality depending on the type of argument being made.

For instance, a photo submitted as a reason to agree or disagree might raise different issues, such as the exaggeration common in political cartoons or appeals to emotion.

Linkage Scores


The following equation could be used to add more points to valid linkages between assumptions, arguments, and conclusions:


  • LVn: Number of times someone has verified the logic of an assumption-conclusion relationship.
  • n: Number of times a verification is removed from a conclusion. A verification of a logical argument that supports a conclusion is an n=1 relationship; a verification of an argument that supports an argument that supports a conclusion is an n=2 relationship.
  • LRn: Logic Reviewer's score. See my discussion on giving people scores based on the performance of arguments they support or oppose.
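
The equation referred to above is not shown in the original. One plausible form consistent with these definitions (an assumption on my part) would weight each verification by the reviewer's score and discount it by its distance from the conclusion:

$$\text{Linkage points} = \sum_{n}\frac{LV_n \times \overline{LR}_n}{n}$$

where $\overline{LR}_n$ is the average Logic Reviewer's score among the verifiers at distance n.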

Administrators



Until we have algorithms that can automatically promote better arguments (by rewarding good behavior, punishing bad behavior, and removing spam and trolls), we may need administrators.

There are a number of ways of finding administrators. We could draw from the field of conflict resolution and dispute mediation, which already has standards of training for good moderators; for instance, we could offer training and administer tests for skills that have been proven to resolve conflicts.

For specific arguments, we could give slightly more weight to the opinions of "certifiable experts" in that field. For each person who asserts they are an expert, we could use an algorithm to determine how many extra points to give their vote. I propose the following equation and list of definitions:



  • PRn: Number of professors who remember or recommend a student.
  • PAn: Number of professors who were asked to recommend a student. The database would have a form for sending a recommendation request, with a list of known professors at a university to send it to.
  • C: Constant. This is needed because if you ask one teacher and they recommend you, we still are not 100% sure that you went to the school or were a good student. The constant creates a situation where getting two of two recommendations is better than getting one of one, even though both are 100%.
  • VESn: Verifier's Expertise Score. A teacher's level of expertise would be obtained by a similar equation, with their peers acting as the verifiers for each area of study.
  • RSn: Recommender's score. This multiplier would allow teachers to weight their recommendation, perhaps on a scale of 0 to 1.
  • R̄Sn (RSn with a bar over it): The average score given out by a given teacher.
  • SRn: Number of fellow students who remember or recommend a student.
  • SAn: Number of fellow students who were asked to recommend a student. Similar to above, the database would have a form for sending a recommendation request.
  • sn: Score on a test designed to determine proficiency.
  • s̄n (sn with a bar over it): The average test score.
  • GPA: Grade point average.
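
The equation referred to above is likewise not shown in the original. Purely as an illustration of how these quantities might combine, here is a Python sketch; the additive combination, the division by each recommender's average given-out score, and the reading of the second list entry as "students asked" (SAn, by analogy with PAn) are all assumptions on my part.

```python
from statistics import mean

def expert_weight(prof_recs, pa, sr, sa, test_scores, gpa,
                  c=1.0, max_gpa=4.0):
    """Extra vote weight for a self-asserted expert (illustrative sketch).

    prof_recs:   list of (RS, VES, RS_avg) tuples, one per recommending
                 professor: recommendation strength (0-1), that professor's
                 Verifier's Expertise Score, and the average score the
                 professor gives out (normalizes against easy graders).
    pa:          number of professors asked (PAn).
    sr, sa:      fellow students who recommended / who were asked.
    test_scores: proficiency test scores (sn), each on a 0-1 scale.
    gpa:         grade point average, normalized by max_gpa.
    c:           constant so that 2-of-2 recommendations beat 1-of-1.
    """
    prof_term = sum(rs * ves / rs_avg
                    for rs, ves, rs_avg in prof_recs) / (pa + c)
    student_term = sr / (sa + c)  # 2/(2+1) > 1/(1+1), as C intends
    test_term = mean(test_scores) if test_scores else 0.0
    return prof_term + student_term + test_term + gpa / max_gpa

# Two of two professors recommend (strength 1.0, expertise 0.8, average
# given-out score 0.9); three of four fellow students recommend.
print(expert_weight([(1.0, 0.8, 0.9), (1.0, 0.8, 0.9)], pa=2,
                    sr=3, sa=4, test_scores=[0.85], gpa=3.6))
```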