
Example: The end does not justify the means

John Stuart Mill, an influential liberal thinker of the 19th century and a proponent of utilitarianism, though his teachings differ somewhat from Jeremy Bentham's philosophy

Reasons to agree

  1. When you live in a society with laws, the ends (your goals) do not justify illegal means (or ways of accomplishing those goals).
  2. You are a hypocrite if you rely on the law to protect you, but you think you can break the law to accomplish your vision of the greater good.
  3. We need to examine both the ends and the means of our actions.
  4. From a practical standpoint, if everyone thought the end justified the means, the world would be a much worse place: an extremist reading of that principle would allow you to kill those who disagreed with you, resulting in endless war and murder.
  5. A lot of people have justified harmful actions by claiming that the end justifies the means.
  6. God will not require us to do evil in order to defeat evil.
  7. People who say that the ends justify the means often cause more problems trying to fix things than if they had just stayed out of it. It's better to live and let live.
  8. Just because an abortion may result in good things for the mother, and even for society as a whole, doesn't mean that it is all right. You can't say that taking a life is ever justified.
  9. The rightness or wrongness of one's conduct derives from the character of the behaviour rather than from its outcomes.
Reasons to disagree
  1. If killing is wrong, would you have killed Hitler if you knew it would have saved millions of lives? The ends may justify the means if, in the long run, they help more people than they hurt. I would have killed Hitler.
  2. Sometimes the end justifies the means and sometimes it doesn't. 
  3. The end does justify the means when the good guys are doing the justification. 
  4. You can accomplish good by doing evil. 
  5. The consequences of one's conduct are the ultimate basis for any judgment about the rightness of that conduct.

This conclusion is ranked by the ratio of reasons to agree vs. reasons to disagree (please add your own reasons to agree or disagree). We can use algebra to represent each term and make this more formally mathematical; each term is explained below, and the formula follows the definitions.

  • n: Number of "steps" the current argument is removed from the conclusion
  • A(n,i)/n: When n=1, we are looking at arguments used directly to support or oppose a conclusion. The second subscript, i, indicates that we total all the reasons to agree. So when n=1 we might have five values of i, indicating five reasons to agree, labeled A(1,1), A(1,2), A(1,3), A(1,4), and A(1,5). The n in the denominator means that reasons to agree with reasons to agree contribute only ½ a point to the overall conclusion, reasons to agree with reasons to agree with reasons to agree only ⅓ of a point, and so on.
  • D(n,j)/n: Ds are reasons to disagree and work the same way as As, except that they are subtracted from the conclusion score. Therefore, if you have more reasons to disagree, you will have a negative score. The subscript j just indicates that each reason is independent of the others.
  • The denominator is the total number of reasons to agree or disagree. This normalizes the equation, so the conclusion score (CS) represents the net percentage of reasons that agree. The conclusion score will range between -100% and +100% (or -1 and +1).
Jeremy Bentham, best known for his advocacy of utilitarianism
  • L: Linkage Score. The above equation would work very well if people submitted arguments that they honestly felt supported or opposed conclusions. We could probably find informal ways of making this work, similar to how Wikipedia trusts people and has a team of editors to ensure quality. However, we could also introduce formal ways to discourage people from using bad logic. For instance, people could submit "the grass is green" as a reason to support the conclusion that we should legalize drugs. The belief that the grass is green will have some good reasons to support it, and may have a high score. At first, to avoid this problem, I would just have editors remove bad-faith arguments. A more formalized process would be to give each argument a linkage score between -1 and +1 that gets multiplied by the argument's score and represents the percentage of that argument's points that should be passed on to the conclusion's score. See LinkageScore for more
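Putting these terms together, the conclusion score can be written out roughly as follows (each A and D simply counts one reason, so the denominator is just the total number of reasons submitted, and L is each argument's linkage score):

$$CS \;=\; \frac{\sum_{n}\sum_{i}\dfrac{L_{n,i}\,A_{n,i}}{n} \;-\; \sum_{n}\sum_{j}\dfrac{L_{n,j}\,D_{n,j}}{n}}{\sum_{n}\sum_{i}A_{n,i} \;+\; \sum_{n}\sum_{j}D_{n,j}}$$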

Conclusion Score = [(5 × 1) − (4 × 1)] / (5 + 4) = (5 − 4)/9 ≈ 11% (because I don't have this working yet with linkage scores, let's assume L = 1 for each argument). This might not sound like much, but looking at the math you can see that values will range between -100% and +100%.
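Here is a minimal Python sketch of that calculation; the function and its data format are my own illustration rather than an existing implementation:

```python
def conclusion_score(agree, disagree):
    """agree / disagree: lists of (n, L) pairs, where n is how many steps the
    argument is removed from the conclusion and L is its linkage score.
    Each argument counts as one reason, discounted by depth and linkage."""
    plus = sum(L / n for n, L in agree)
    minus = sum(L / n for n, L in disagree)
    total = len(agree) + len(disagree)
    return (plus - minus) / total if total else 0.0

# The worked example above: 5 direct reasons to agree, 4 to disagree, L = 1.
print(f"{conclusion_score([(1, 1.0)] * 5, [(1, 1.0)] * 4):.0%}")  # -> 11%
```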

If the ends don't justify the means, then how can this be morally right?



Promoting good linkages between assumptions, arguments, and conclusions

There will be slightly different forms for evaluating the different types of argument, because specific questions can be tailored to promote better quality depending on the type of argument being made.

For instance, a photo submitted as a reason to agree or disagree might raise different issues that need to be addressed, such as the exaggeration common in political cartoons or appeals to emotion.

Linkage Scores


The following equation could be used to add more points to valid linkages between assumptions, arguments, and conclusions:


• LVn: Number of times someone has verified the logic of an assumption-and-conclusion relationship
• n: Number of times a verification is removed from a conclusion. A verification of a logical argument that supports a conclusion is an n=1 relationship; a verification of an argument that supports an argument that supports a conclusion is an n=2 relationship.
• LRn: Logic Reviewer's score. See my discussion on giving people scores based on the performance of arguments they support or oppose.
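From these definitions, the equation would be something along these lines (the exact form, and any multiplication factors, are assumptions to be tuned):

$$\text{Linkage bonus} \;=\; \sum_{n}\frac{LV_n \cdot LR_n}{n}$$

so verifications of arguments that are further removed from the conclusion contribute proportionally less, scaled by the reviewers' scores.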

Administrators



Until we have algorithms that can automatically promote better arguments (by rewarding good behavior, punishing bad behavior, and removing spam and trolls), we may need administrators.

There are a number of ways of finding administrators. We could draw from the field of conflict resolution and dispute mediation. For instance, we could offer training and give tests for skills that have been proven to resolve conflicts. There is a whole field of conflict resolution, and it already has standards of training for good moderators.

For specific arguments, we could give slightly more weight to the opinions of "certifiable experts" in that field. For each person who asserts they are an expert, we could have an algorithm to determine how many extra points to give their vote. I propose the following equation and list of definitions:



• PRn: Number of professors who remember or recommend a student.
• PAn: Number of professors who were asked to recommend a student. The database would have a form for sending a recommendation request, and a list of known professors at a university to send the request to.
• C: Constant. This is needed because if you ask one teacher and they recommend you, we still are not 100% sure that you went to the school or were a good student. The constant creates a situation where getting two recommendations out of two requests is better than getting one out of one, even though both ratios are 100%.
• VESn: Verifier's expertise score. A teacher's level of expertise would be obtained by a similar equation, with their peers acting as the verifiers for each area of study.
• RSn: Recommender's score. This multiplier would allow teachers to weight their recommendation, perhaps on a scale of 0 to 1.
• RSn with a line over it: The average score given out by a given teacher.
• SRn: Number of fellow students who remember or recommend a student.
• SAn: Number of fellow students who were asked to recommend a student. As above, the database would have a form for sending a recommendation request.
• sn: Score on a test designed to determine proficiency.
• sn with a bar over it: The average test score.
• GPA: Grade point average.
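The combination below is only my guess at how these terms might fit together, with placeholder weights; it is meant to show the shape of the calculation, not the exact equation:

```python
def expert_weight(PR, PA, SR, SA, C=1.0, VES=1.0, RS=1.0, RS_avg=1.0,
                  s=None, s_avg=None, GPA=None):
    """Illustrative sketch only -- the weights and structure are assumptions.
    PR/PA: professor recommendations received / requested; SR/SA: the same for
    fellow students; C: constant so that 2-of-2 recommendations beat 1-of-1;
    VES: recommender expertise; RS, RS_avg: recommendation weight and that
    recommender's average; s, s_avg: test score and average; GPA: 0-4 scale."""
    prof = (PR / (PA + C)) * VES * (RS / RS_avg)   # professor recommendations
    stud = SR / (SA + C)                           # fellow-student recommendations
    test = (s / s_avg) if s and s_avg else 1.0     # proficiency test, if taken
    gpa = (GPA / 4.0) if GPA else 1.0
    return prof + 0.5 * stud + 0.25 * test + 0.25 * gpa  # placeholder factors

# The constant C makes 2-of-2 recommendations worth more than 1-of-1:
print(expert_weight(PR=2, PA=2, SR=0, SA=0) > expert_weight(PR=1, PA=1, SR=0, SA=0))
```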

The main Algorithm

Abstract 

I propose that we build the SQL code that would facilitate an online forum. This forum would use a relational database to track reasons to agree and disagree with conclusions. It would also allow you to submit a belief as a reason to support another belief (see Figure 1 below):


Figure 1: Arguments used to support other arguments

Arguments are currently made on websites, in books, and even in videos and songs. It would be powerful to outline all the arguments that agree or disagree with a conclusion and put them on the same page as seen below:



Figure 2: Arguments go from websites, books, songs, videos, into a relational database and are presented with their structure

Having the structure of how all these arguments are used to support each other could allow us to automatically strengthen or weaken a conclusion's score based on the scores of its assumptions.

The purpose of the Idea Stock Exchange is to find ways to give conclusions scores based on the quality and quantity of reasons to agree or disagree with them, using an open-source SQL database.
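As one concrete sketch of that open-source database, here is a minimal two-table schema (using SQLite for concreteness; the table and column names are my own illustration). The essential idea is that an argument is just a row linking one belief (the premise) to another (the conclusion) as a reason to agree or disagree, with a linkage score:

```python
import sqlite3

conn = sqlite3.connect("idea_stock_exchange.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS beliefs (
    id    INTEGER PRIMARY KEY,
    text  TEXT NOT NULL,
    score REAL DEFAULT 0.0          -- conclusion score, recomputed from arguments
);
CREATE TABLE IF NOT EXISTS arguments (
    id            INTEGER PRIMARY KEY,
    conclusion_id INTEGER NOT NULL REFERENCES beliefs(id),
    premise_id    INTEGER NOT NULL REFERENCES beliefs(id),  -- belief used as a reason
    agrees        INTEGER NOT NULL,   -- 1 = reason to agree, 0 = reason to disagree
    linkage       REAL DEFAULT 1.0    -- -1 to +1: how well the premise supports it
);
""")
conn.commit()
```

Because a premise is itself a row in the beliefs table, it can have its own supporting and opposing arguments, which is what lets scores propagate up the chain.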
Pros and Cons are a tried and true method to evaluate a conclusion

Many people, including Thomas Jefferson and Benjamin Franklin, advocated making a list of pros and cons to help them make decisions. The assumption is that the quantity and quality of the reasons to agree or disagree with a proposed conclusion have some bearing on the underlying strength of that conclusion. I wholeheartedly agree.

No one has yet harnessed the power of Pros and Cons in the information age. We can.

However, now that we have the internet, we can crowd source the brainstorming of reasons to agree or disagree with a conclusion.

The only trick is how to evaluate the strength of each pro or con. Many people suggest putting the strongest pros or cons at the top of the list. Also, if we had enough time, we might make a separate list for each pro or con.

For instance, FDR had to decide whether we should join WWII. One pro might be that the German leaders were bad. There were many reasons to support this belief, and this belief was in turn used to support another belief.

Not very many people have enough time to make a pro and con list for each pro or con. But on the internet we keep making the same arguments over and over again. For thousands of years we have been repeating the same arguments that Aristotle and Homer made. Most of our arguments have been made thousands or millions of times. However, no one has ever taken the time to put them into a database and outline how they relate to each other. We can change this.

I propose that we find algorithms that attempt to promote good conclusions and arguments. The simplest method of scoring conclusions is to count the number of reasons to agree and subtract the number of reasons to disagree. Because some arguments are better than others, we should repeat this process for every argument until we reach verifiable data. The following equation represents this plan:

• n: Number of "steps" the current argument is removed from the conclusion



We can use algebra to represent each term and make it look a little more mathematical, using the same formula given earlier and the definitions below:

• n: Number of "steps" the current argument is removed from the conclusion
• A(n,i)/n: When n=1, we are looking at arguments used directly to support or oppose a conclusion. The second subscript, i, indicates that we total all the reasons to agree. So when n=1 we might have five values of i, indicating five reasons to agree, labeled A(1,1), A(1,2), A(1,3), A(1,4), and A(1,5). The n in the denominator means that reasons to agree with reasons to agree contribute only ½ a point to the overall conclusion, and reasons to agree with reasons to agree with reasons to agree only ⅓ of a point, and so on. If we decided to make the denominator n × 2, those third-level reasons would contribute ⅙ of a point instead. It is clear that some of their score should contribute to the conclusion score, because weakening an assumption should automatically weaken all the conclusions built on that assumption. We could continually tune n to give reasonable results, or each website could use its own secret sauce.
• D(n,j)/n: Ds are reasons to disagree and work the same way as As, except that they are subtracted from the conclusion score. Therefore, if you have more reasons to disagree, you will have a negative score. The subscript j just indicates that each reason is independent of the others.
• The denominator is the total number of reasons to agree or disagree. This normalizes the equation, so the conclusion score (CS) represents the net percentage of reasons that agree. The conclusion score will range between -100% and +100% (or -1 and +1).

The above equation would work very well if people submitted arguments that they honestly felt supported or opposed conclusions. We could probably find informal ways of making this work, similar to how Wikipedia trusts people and has a team of editors to ensure quality. However, we could also introduce formal ways to discourage people from using bad logic.

For instance, people could submit "the grass is green" as a reason to support the conclusion that we should legalize drugs. The belief that the grass is green will have some good reasons to support it, and may have a high score. At first, to avoid this problem, I would just have editors remove bad-faith arguments. But a more formalized process would be to give each argument a linkage score between -1 and +1 that gets multiplied by the argument's score and represents the percentage of that argument's points that should be passed on to the conclusion's score.

I believe the most elegant way to come up with a linkage score would be to simply make a new argument, that "A supports B," with all the normal reasons to agree and disagree. However, I also propose using the percentage of up-votes compared to down-votes, along with the other good idea promoting algorithms below.

Also, without editors, you would run into the problem of duplication. If we had had this system at the time of the Gulf Wars, people could have submitted the belief that Saddam Hussein was a bad person as a reason to support the belief that we should go to war. People would submit the belief that we don't go to war with everyone who is bad as a way of weakening the linkage between this argument and the conclusion. But someone might also submit the belief that he was "evil." To what extent are the words "evil" and "bad" the same thing? Is evil just a worse kind of bad? These questions could be quantified if, for each argument, we brainstormed a list of "other ways of saying the same thing." Of course we would use all of our algorithms to determine to what degree they are the same thing. If we determine that two items are 85% the same, then when both of them are used as reasons to support the same conclusion, they would only count as 1.15x a single score, not 2x.
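A small sketch of that de-duplication rule, assuming some other process has already estimated how similar two arguments are (a number between 0 and 1):

```python
def combined_weight(score_a, score_b, similarity):
    """If two reasons are judged `similarity` alike (0 to 1), the second one
    only adds (1 - similarity) of its weight, so two 85%-identical reasons
    count as 1.15x a single reason rather than 2x."""
    return score_a + (1.0 - similarity) * score_b

print(combined_weight(1.0, 1.0, 0.85))  # -> 1.15
```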

Examples

We might be arguing the conclusion that "It was good for us to join WWII." Someone may submit the argument that "Nazis were doing bad things" as a reason to support the conclusion about entering the war. The belief that Nazis were doing bad things might already have a score. Let's suppose that this idea score has a high ranking of 99%. This might be awarded a linkage score of 90% (as a reason to support the conclusion that we should have gone to WWII). In this situation it would contribute 0.891 points (0.99 × 0.90) to the conclusion score for the belief that "It was good for us to join WWII." Someone else might submit the belief that "Nazis were committing wide-scale, systematic genocide" as a reason to support the belief that "It was good for us to go to WWII." Because we don't go to war with every country that "does bad things," we would expect this linkage score to be higher, perhaps 98%.

For example, the belief that Nazi Germany's leaders were evil is a belief with many arguments to support it. However, it can also be used as an argument to support other conclusions, such as the belief that it was good for us to join WWII.


Assumptions
• Reason: a belief used to support another belief. (For example, the belief that Nazi Germany's leaders were evil is a belief with many arguments to support it; it can also be used as an argument to support other conclusions, such as the belief that it was good for us to join WWII.)
• Good belief: good reasons to agree > good reasons to disagree
• Bad belief: good reasons to agree < good reasons to disagree
• Great belief: good reasons to agree >> good reasons to disagree
• Terrible belief: good reasons to agree << good reasons to disagree
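Read as a rough classification rule, these definitions could be sketched as follows; where exactly ">" ends and ">>" begins is not specified above, so the margin used here is an arbitrary placeholder:

```python
def classify(good_agree, good_disagree, margin=2.0):
    """good_agree / good_disagree: counts (or summed scores) of the good
    reasons on each side; `margin` separating 'great' from merely 'good'
    (and 'terrible' from merely 'bad') is an assumption."""
    if good_agree >= margin * max(good_disagree, 1):
        return "great belief"
    if good_agree > good_disagree:
        return "good belief"
    if good_disagree >= margin * max(good_agree, 1):
        return "terrible belief"
    return "bad belief"
```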


There are many things web designers can do to help people resolve their conflicts +4.16

Reasons to agree:
  1. It would help us move towards understanding if web forum designers rewarded those who can demonstrate that they understand those with whom they disagree.
    1. There are many ways discussion forum designers can reward those who demonstrate that they understand those with whom they disagree.
      1. Web designers could test users' ability to properly identify similar concepts from multiple-choice options.
        1. Perhaps people who have their comments evaluated could have special consideration in evaluating whether or not the person who disagreed got their statement right.
      2. Maybe before you disagree with someone, you would have to put into your own words exactly which part you disagreed with. You could do this by highlighting or bolding the part that you disagree with.
  2. Web designers would help online debate if they created web forums that allowed users to identify specifically which portions of text they agree and disagree with.
    1. Not identifying exactly which portion you disagree with results in confusion.
    2. Psychologists could help out in this section.
Score:
# of reasons to agree: +2
# of reasons to disagree: -0
# of reasons to agree with reasons to agree: +3/2+2/4+1/6=2.16
# of reasons to agree with reasons to disagree: -0
Total Idea Score: +4.16
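This tally can be computed mechanically by walking the nested list above as a tree. The sketch below uses the same discounting as the breakdown above (direct reasons count fully; a reason k levels below a direct reason is divided by 2k), which is one of the weighting choices discussed earlier; since there are no reasons to disagree here, every node is positive:

```python
def idea_score(node, depth=1):
    """node = (sign, children): sign is +1 for a reason to agree, -1 for a
    reason to disagree; children are the reasons supporting that reason."""
    sign, children = node
    weight = 1.0 if depth == 1 else 1.0 / (2 * (depth - 1))
    return sign * weight + sum(idea_score(c, depth + 1) for c in children)

# The list above: 2 direct reasons, 3 at level two, 2 at level three, 1 at level four.
agree_1 = (+1, [(+1, [(+1, [(+1, [])]), (+1, [])])])
agree_2 = (+1, [(+1, []), (+1, [])])
print(round(idea_score(agree_1) + idea_score(agree_2), 2))  # -> 4.17 (4.16 above, truncated)
```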

Reframing Online Debates for Constructive Dialogues

It's essential to restructure online debates to ensure that reasons supporting and opposing a belief coexist on the same platform. True understanding and resolution in any debate come not from overlooking the counterarguments but from directly engaging with them.

Ignoring an opponent's perspectives and data is akin to navigating a debate with blinders on. It limits the depth of the discussion and often leads to an echo chamber effect, where one's own beliefs are amplified without challenge, stunting intellectual growth and understanding. 

Constructive debates require acknowledging and addressing the full spectrum of views, which is why having reasons to agree and disagree presented together is crucial. This approach fosters a more holistic and nuanced understanding of issues, allowing participants to weigh different viewpoints fairly and make more informed decisions.

By structuring online debates in this way, we encourage not just the exchange of ideas but the cultivation of respect, empathy, and a genuine quest for truth.

If we entered our beliefs and arguments into databases, there are many features of relational databases that could help us come to better conclusions


  1. If our beliefs and arguments were entered into a relational databases, we could: 
    1. tag arguments as either a reason to agree or disagree with a particular belief. This would be beneficial because: 
      1. We could post the results so that reasons to agree or disagree with a conclusion would be on the same webpage.
      2. It would be beneficial to have all the reasons to agree and disagree with a belief on the same page.  
    2. assign scores to arguments
    3. assign scores to beliefs, based on the score of the arguments for and against the beliefs
    4. assign scores to beliefs, based on other beliefs that are used to support or oppose them. For instance, the belief that the middle class should get a tax break has many reasons to agree or disagree with it, and it can also be used as a reason to support or oppose other beliefs, such as the belief that we should support politicians who agree or disagree with a middle-class tax cut. 
    5. tag them with intelligent metadata, to allow computers to help organize the arguments for us. 

We need to back up our beliefs with clear logic and well-founded reasoning



Reasons or arguments people use to agree:
  1. Evidence-free metaphysical speculations or politicized wish-fulfillment fantasies will destroy us.
    1. We can't just adopt socialism because it makes us feel good, without first knowing that it will work and that it won't put our good, freedom-loving nice guys at a disadvantage in competition with non-freedom-loving dictators. 
  2. Bertrand Russell was right when he said, "It is undesirable to believe a proposition when there is no ground whatsoever for supposing it is true."
  3. When you make an assumption you make an ass out of you and me. 
  4. If we don't use good logic to make our arguments, we will come to bad decisions. 
  5. If we want to survive as a species, we need to make good decisions. 
  6. Our beliefs affect our happiness
    1. If you want to enjoy your life, you should spend your time on rewarding activities. 
  7. Our beliefs affect our actions.
  8. Our beliefs affect our personal success
    1. If we believe it is important not to be seen as a nerd, and we believe nerds are well educated, we will not want to be well educated. Your chances for success will be improved with education.

Our conclusions and the reasons for coming to them are all tied together in complex, nonlinear ways, similar to a relational database

  1. Our conclusions have many reasons to agree and disagree with them and each of these beliefs has many reasons to agree and disagree with them. As these arguments branch out and arguments multiply, it becomes too much for our brains to handle all at once.
  2. Assumptions are beliefs that are used to support other beliefs. If you change one assumption, it will change the strength of each conclusion that builds on that assumption. In a relational database, you can record that five people live at the same address; when you change that address, it changes for all of them. In a similar way, if we strengthen or weaken any assumption in a relational database, it will strengthen or weaken all of the conclusions that are based on that assumption. Defining all these relationships is the only way we can ever make progress at weighing all the data that we have.

We should crowd source a database of things that people believe and arguments they use


  1. We need to back up our beliefs with good logic.  Score: 9
  2. We can build a relational database that outlines our beliefs relatively cheaply. 
  3. If we entered our beliefs and arguments into databases, there are many features of relational databases that could help us come to better conclusions. 
  4. If we can sequence millions of lines of human DNA, you would think that we could organize our thoughts and beliefs. 
  5. You need advanced scientific methods to sequence the human genome, but all you need is a database to outline the things people believe.
  6. If you use a relational database to associate arguments with the beliefs they support, you could design a scoring system that analyzes the validity of people's arguments, and then the cumulative validity of their beliefs. 

A relational database is the best way of outlining our beliefs

Other good idea promoting algorithms: laws



I believe that we can count the number of laws that agree or disagree with a belief as a way of measuring how strongly society believes something is wrong.

For example, every society believes that murder is wrong and often punishes it through some sort of criminal justice system.

One way of quantifying this, so that conclusions can be given scores, would be to make an equation and build it into the software. It would account for the number of laws said to support a belief (such as "murder is bad"), the quality of the arguments that a law supports a certain belief about a behavior being bad, the relationship score between the belief and the law, the severity of the punishment for breaking the law, and the relative number of laws that can be said to agree or disagree with the belief or any of its supporting arguments.

A way of counting all of this with a powerful algorithm could be expressed this way:


Or we could represent the math more simply by substituting algebra, with the following definitions:



Definitions:

• LAn / LDn: Laws that are argued to agree or disagree with a conclusion.
• LAn + LDn: Number of laws submitted in this forum as reasons to agree or disagree with a conclusion. I'm just trying to find some way of normalizing this factor, or weighting it, so that it doesn't carry too much or too little weight. Obviously, like any other factor on this forum, we could tweak the multiplication factors or allow users to tweak them.
• LSn: Linkage score. The linkage would become its own argument, with reasons to agree and a score between -1 and 1. A negative score would be a law that actually makes a counter-argument to the intended conclusion, 0 has no relation, and 1 fully supports the intended conclusion.
• PSn: Punishment severity. For instance, is the punishment a felony or a misdemeanor? How many years of prison do people typically receive?
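Since the equation itself is not spelled out above, the sketch below is only one plausible reading of these definitions: each cited law contributes its linkage score scaled by punishment severity, agreeing laws add and disagreeing laws subtract, and the total is normalized by the number of laws cited:

```python
def law_score(laws):
    """laws: one (agrees, linkage, severity) tuple per law cited in the forum.
    agrees: +1 if the law is cited as agreeing with the belief, -1 if disagreeing.
    linkage: the LS relationship score, between -1 and +1.
    severity: PS, punishment severity normalized to 0..1.
    The exact combination and any multiplication factors are assumptions."""
    if not laws:
        return 0.0
    return sum(a * ls * ps for a, ls, ps in laws) / len(laws)

# e.g. two strongly linked felony statutes in favor, one weakly linked statute against:
print(law_score([(+1, 0.9, 1.0), (+1, 0.8, 1.0), (-1, 0.4, 0.5)]))  # -> ~0.5
```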


Examples: Is the Burqa so important that it should be required, or so bad it should be banned?

For example, the fact that almost all countries outlaw the murder of innocent adults represents the amount of validity that most societies attribute to that belief. It would be rare to have laws that both agree and disagree on non-controversial topics. However, there are countries that ban burqas and countries that require them. A way of measuring whether mankind thinks it is wrong to wear the burqa would be to take the number of countries that ban them (France, etc.) and subtract the number of countries that require them (Afghanistan, Saudi Arabia, etc.). Depending on which side the law is used to support, you would add or subtract the percentage of countries that ban it relative to the total number of countries that have laws on the use of the item.

Examples: Is shooting an intruder a good activity that helps protect law-abiding citizens, or a bad activity that ends a life too soon?
You could add the percentage of states that say it is wrong to shoot an intruder as evidence to support the belief that it is wrong to do so.

An open letter to Math teachers

I am writing you to ask for your assistance in promoting "good idea promoting algorithms" such as the following:

The above formula would work in an environment where you were able to submit reasons to agree or disagree with a belief, and then submit reasons to agree or disagree with those arguments. With this format in place, you could count the reasons to agree and subtract the number of reasons to disagree, and then you could integrate the series of reasons to agree with reasons to agree.

You should use this equation because:
  1. It is unique. I have never seen someone use an algorithm in an attempt to promote good ideas. Math can become more interesting when kids see the variety of ways it can be applied. 
  2. Kids are idealistic, and often want to improve the world. Challenging them to come up with a good idea promoting algorithm can channel that energy into learning math.
  3. This simple algorithm that counts the reasons to agree with a conclusion could change the world, similar to how Google's link-counting algorithm changed the world. When lots of people link to a website, Google assumes that website is a good one. Then when that good website links to another website, Google assumes the second website is also good. Similarly, when you submit good reasons to support an argument, a smart web forum would also give points to the conclusions that are built on that assumption. 
  4. The more people who build good idea promoting algorithms, the less stupid the world we live in will be.

Other Factors: Stuff like movies, songs, experts, etc. that agree or disagree

Similar to how I say books can support or oppose different conclusions, movies (often documentaries) can support or oppose them as well. Rotten Tomatoes gives scores to movies. All of this data could be imported, along with the formal logical arguments that a movie actually attempts to make for or against a belief.

L = Link score. When we submit beliefs as reasons to support other beliefs, and give higher scores to conclusions that have more reasons to agree with them, people will try to submit beliefs that don't really support the conclusion. For instance, someone might post the belief that the grass is green as a reason to believe the NY Giants will win the Super Bowl. The belief that the grass is green will receive a high score, but its link score will be close to zero.

* As we work this out we may have to apply multiplication factors to not give too much or too little weight to a factor.

** Who has a “.edu” e-mail address from the philosophy department of an accredited university

Other Factors: Logic Professors



I had a logic professor. He was in the philosophy department, and he taught a course on logic. Every university has a few philosophy teachers who teach formal logic.

I think if we tracked the number of logic professors who "certify" the logic of an argument and subtracted the number of logic professors who "discount" it, we could use that data to promote ideas that have been more thoroughly examined and supported.

This is the best equation I can come up with for adding points to a belief based on the number of logic professors who support or oppose the logic used in an argument.

I would love your feedback!

Below is an explanation of each term.



  • NPAn / NPDn = Number of times a certified logic instructor has verified (NPA) or discounted (NPD) the logic of a reason to agree or disagree that is n steps removed from the conclusion
  • Summing NPA or NPD would mean that if a logic professor discounted a reason to support your conclusion, that would take away ½ a point, because that action is twice removed from the conclusion.
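Based on those two definitions, the intended equation appears to be something like the following (the exact form is an assumption):

$$\text{Logic-professor adjustment} \;=\; \sum_{n}\frac{NPA_n - NPD_n}{\,n+1\,}$$

Dividing by n + 1 gives the ½-point behaviour described above: a professor certifying or discounting a direct (n = 1) reason moves the conclusion by half a point, and judgments on arguments further removed count for less.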