Question of the day

Tuesday, Sep 26, 2023 - Posted by Rich Miller

* The setup

But when it comes to political campaigns and politics, the misuse of artificial intelligence could threaten our very democracy.

“Deepfakes” use AI to create images, sound clips and videos that appear very real but are entirely manufactured. They aren’t Photoshopped images that swap out one person’s face for another, but technology that can take anyone’s likeness and voice and create virtually any video the creator wants.

A bipartisan group of senators has introduced the Protect Elections from Deceptive AI Act, which would ban the distribution of “materially deceptive” AI-generated political ads relating to federal candidates or certain issues that seek to influence a federal election or fundraise.

It’s a good start but doesn’t go far enough. AI has become easy to use and available to anyone, including state and local politicians and their staff.

Congress should require any political ad or politically related content that uses AI to be clearly labeled as being AI generated, whether it is deceptive or not.

* The Question: Should the Illinois legislature vote to require any political ad that uses AI to be clearly labeled as being AI generated? Take the poll and then explain your answer in comments, please.


       

57 Comments
  1. - Steve - Tuesday, Sep 26, 23 @ 12:28 pm:

    I voted yes. The other day I heard the late Christopher Hitchens, via AI, speaking about current politics. I don’t think that should be in political ads.


  2. - OneMan - Tuesday, Sep 26, 23 @ 12:28 pm:

    Voted yes, but I’m not sure how you would completely define ‘AI Generated’, and would this rule extend to stuff that was obviously faked (like JB’s head on top of a bear or something)?

    Would you describe it as AI or just computer-generated? It seems like a good idea, but I think the implementation is going to be a challenge.


  3. - Mr. Middleground - Tuesday, Sep 26, 23 @ 12:31 pm:

    This is an obvious first step. We need way more regulation around AI but any step in the right direction is a good thing.


  4. - Give Us Barabbas - Tuesday, Sep 26, 23 @ 12:32 pm:

    The temptation to use it in dark funded oppo is high. The bad guys will put it out anyway, as they do now with the cruder stuff, not caring if it gets swatted, because the damage has already been done seeding the lies out into the public consciousness.


  5. - John Morrison - Tuesday, Sep 26, 23 @ 12:33 pm:

    Absolutely. This is spiritually in line with our biometric privacy laws as well as good transparency practices.

    Voters need to know if what they are seeing/hearing is legitimate or not, and those in the videos need to be honestly represented.


  6. - Oswego Willy - Tuesday, Sep 26, 23 @ 12:34 pm:

    Yep.

    Forget everything to anything on this.

    Ads need to identify “actor portrayal” in commercials… at least to seek honesty to the ad.

    So…

    Voted Yes. Easy


  7. - Montrose - Tuesday, Sep 26, 23 @ 12:34 pm:

    Absolutely. And if you are hesitant to disclose the use of AI, then to me that’s a clear sign your goal was to deceive.


  8. - austinman - Tuesday, Sep 26, 23 @ 12:34 pm:

    I voted yes, so people can know it’s AI, and let voters decide whether they want to believe the ad.


  9. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 12:37 pm:

    No.

    There’s just no way to define it in a way that would have a meaningful impact. Is using autocomplete when typing a caption considered ‘using AI’ under this proposal? Because it is using AI. How would labeling it as ‘AI generated’ provide any value?

    If someone can be deceived with AI, they can be deceived without it just as easily. AI isn’t some magical potion. The people consuming the ads are going to be exactly the same people no matter how you label the political ads. Read into that what you will.


  10. - Hank Saier - Tuesday, Sep 26, 23 @ 12:37 pm:

    Yes, but we don’t enforce much now. Or maybe politics gets more attention than crime.


  11. - Pot calling kettle - Tuesday, Sep 26, 23 @ 12:37 pm:

    Yes, more transparency in political speech is good. Of course, the bigger issue remains transparency in donations, most especially who donates to the dark money groups most likely to put up misleading ads.

    To the post, I would expect dark money groups to pop up, run an AI ad that violates whatever rules are put in place, and then quickly disappear so there is no one to hold accountable.


  12. - Oswego Willy - Tuesday, Sep 26, 23 @ 12:38 pm:

    ===There’s just no way to define it===

    Is it altered or edited using AI?

    Pretty simple, yes or no.


  13. - RNUG - Tuesday, Sep 26, 23 @ 12:39 pm:

    Voted yes, but with reservations. AI content needs to be properly labeled, but I’m also concerned about who will be making the decisions to potentially flag or censor items.

    Quis custodiet ipsos custodes? (Who watches the watchmen?)


  14. - Oswego Willy - Tuesday, Sep 26, 23 @ 12:41 pm:

    I’ll leave it at this;

    Defending purposeful deception with AI for the good of a factual argument is an odd “truth” flex.

    Just sayin’


  15. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 12:46 pm:

    “Defending purposeful deception”

    This perfectly proves my point.

    You’ve already equated using AI with being purposely deceptive before the race has even started. That’s your own baggage you are bringing into the argument as a de facto standard that cannot be questioned.

    I’ll ask it again - how is using autocomplete being purposely deceptive? That’s AI.


  16. - Oswego Willy - Tuesday, Sep 26, 23 @ 12:48 pm:

    ===You’ve already equated using AI with being purposely deceptive===

    Oh.

    You think AI editing is going to be for positive ads?

    That’s way too naive.

    Disclose. It’s not up to me or anyone to make AI a positive thing.

    Likely, people perceive it negatively because, why… positive usage?


  17. - Oswego Willy - Tuesday, Sep 26, 23 @ 12:51 pm:

    ===autocomplete===

    Show me that ad, let me hear that radio ad.

    You do know what context we are discussing, no?

    It’s like an actor portraying a “doctor” telling me putting peanut butter on a blister will cure my bronchitis… “actor portrayal”… because no doctor will say it, but dress up an actor…

    “autocomplete”?


  18. - ArchPundit - Tuesday, Sep 26, 23 @ 12:51 pm:

    ===Would you describe it as AI or just computer-generated?

    That’s a good question and it might need to be altered footage or something like that instead of just saying AI


  19. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 12:56 pm:

    “You think AI editing is going to be for positive ads?”

    Why not. It already is.

    AI is used to upscale old or damaged video recordings to make them easier to see or to provide a higher resolution. No deception involved.

    Again, you are bringing the automatic assumption of it always being negative as a justification.

    AI is a tool. Just like words are.

    Maybe we should label all ads using words? They can be used to deceive as well. We don’t do that, because words are a tool, deception is only one way to use that tool.

    “Parental Advisory” labels worked very well, just in the opposite way from what the advocates intended. The label became a badge to seek out, not to avoid.


  20. - Oswego Willy - Tuesday, Sep 26, 23 @ 12:56 pm:

    ===AI is used to upscale old or damaged video recording to make it easier to see or provide a higher resolution. No deception involved.===

    Just label the deception. Again, easy


  21. - Oswego Willy - Tuesday, Sep 26, 23 @ 12:58 pm:

    If it’s such a positive, why not just embrace the label?

    Maybe that’s the question.


  22. - Montrose - Tuesday, Sep 26, 23 @ 12:59 pm:

    “AI is a tool.”

    It is a tool, but you can’t say that as though all tools are the same. They just aren’t. How we use/deal with/regulate AI will evolve over time as its prevalence grows and people become more accustomed to it. Right now, a label that lets folks know it’s the tool that’s being used (folks are already aware words are being used) seems like a reasonable, helpful step.


  23. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 1:01 pm:

    “You do know what context we are discussing, no?”

    Yes. The context of the question is;

    “Should the Illinois legislature vote to require any political ad that uses AI to be clearly labeled as being AI generated?”

    Nothing in there defines it as only being required for deceptive ads.

    Again. You are bringing your assumption of ‘always negative’ as a default. It’s not a default, and you seem to be getting angry about having that assumption questioned, and not acknowledging that a tool can be either good or bad. Labeling for the use of a tool alone accomplishes nothing - **UNLESS** you already come to the table with the assumption that anything AI is automatically bad. That’s a false assumption not supported by facts.


  24. - Just Another Anon - Tuesday, Sep 26, 23 @ 1:01 pm:

    I just think AI shouldn’t be a thing. We’ve been warned for years about the potential dangers of AI, from Asimov to the Terminator. I say the same thing about cloning. Crichton had it correct: some people are so busy thinking about whether something can be done that they don’t ask whether it should be.


  25. - DuPage Saint - Tuesday, Sep 26, 23 @ 1:02 pm:

    I would require it on any broadcast ad for almost anything, especially medical commercials. As to the political ones, it should require a voice-over at the beginning stating this is AI generated, and again at the end.


  26. - Oswego Willy - Tuesday, Sep 26, 23 @ 1:02 pm:

    ===Again, you are bringing the automatic assumption of it always being negative as a justification.===

    If it’s a positive, why the pushback on the label?

    ===“Parental Advisory” labels worked very well, just in the opposite way as the advocates intended it to. It became a badge to seek out, not to avoid.===

    Then those so bent in using AI should hope for the label to generate buzz, no?

    This last…

    ===AI is a tool. Just like words are.

    Maybe we should label all ads using words? They can be used to deceive as well. We don’t do that, because words are a tool, deception is only one way to use that tool.===

    Deception is the use that is the problem. If you don’t see deception as a problem with this tool… it’s why your lack of fear of words could be troubling and why any deceptive ad is bad… and also why this idea of “positive deception” is like… “alternative facts”


  27. - Norseman - Tuesday, Sep 26, 23 @ 1:04 pm:

    Yes. Incorporating the reasons used by so many fine commenters here.

    Also, there have been some great points on issues that will need to be addressed.

    (I, Robot was on over the weekend. AI consequences from the minds of the entertainment industry.)


  28. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 1:05 pm:

    “as though all tools are the same.”

    All tools are the same.

    Nuclear science is a tool. It can be used for electricity or bombs.

    A knife is a tool. It can be used to cut a birthday cake, or stab an estranged relative in the chest.

    A car is a tool. It can be used to drive to a hospital for the birth of a child, or traffic a minor across state lines for illegal purposes.

    Words are a tool. They can be used to convey information useful to all parties involved, or they can be used to convince someone you are a Nigerian prince so that they send you money.

    If we labeled a cake:
    “This cake was created with a knife”

    Such a statement would be meaningless, unless you assumed the word knife automatically meant something negative.


  29. - Dupage Dem - Tuesday, Sep 26, 23 @ 1:06 pm:

    Voted yes. Unfortunately, oftentimes the average person believes what they see, and if they are not given any heads-up that this is AI, what they see can contain even more lies or half-truths than you normally see in campaign ads.


  30. - Oswego Willy - Tuesday, Sep 26, 23 @ 1:07 pm:

    ===Nothing in there defines it as only being required for deceptive ads.===

    Again… Defending purposeful deception with AI for the good of a factual argument is an odd “truth” flex.

    Label it. You use it. Label it. Altered? Label it.

    ===Labeling for the use of a tool alone accomplishes nothing - **UNLESS** you already come to the table with the assumption that anything AI is automatically bad. That’s a false assumption no supported by facts.===

    Again, why the pushback on any label at all? You are assuming it will be taken as negative, something that is not supported by facts, yet… that’s your argument.

    ===getting angry===

    Don’t gaslight me to my alleged status to “anger”, I’m enjoying the defense of my thoughts, but that technique you used doesn’t need a label either.

    :)

    All good, I’m sure you will never ask where I was again, and I can understand, bud.


  31. - snowman61 - Tuesday, Sep 26, 23 @ 1:08 pm:

    Voted yes, but please no knee-jerk actions to restrict; thought-out discussion instead. I hope this can be done, but I don’t have high hopes that both parties can have a sensible discussion. We will end up with many different types of laws between states, which is not good.


  32. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 1:15 pm:

    “Don’t gaslight me”

    Perhaps anger was the wrong word to use; you are correct. Patronizing would probably be more accurate - in asking if I ‘understood the context, no’.

    I understand AI to be a huge universe which involves everything from spell check at the low end, with autocomplete a small step above that. A level above that is ‘color correction’ in a photo, whether it was taken in full sunlight or in moonlight. A level right around there is lossy compression of a digital image in JPEG format, where higher compression is applied to some areas of the image than others based on the content within the image being compressed.

    Yes, I understand the context. Quite fully.

    Labeling any of those things is meaningless - unless your intent is to taint something before anyone even sees it with an assumed negativity. That seems… deceptive.


  33. - Oswego Willy - Tuesday, Sep 26, 23 @ 1:24 pm:

    ===Labeling any of those things is meaningless - unless your intent is to taint something before anyone even sees it with an assumed negativity. That seems… deceptive.===

    So it’s not me, it’s you that sees the labeling as negative.

    You wrote this…

    ===Again, you are bringing the automatic assumption of it always being negative as a justification.===

    But also, again, this…

    ===Labeling any of those things is meaningless - unless your intent is to taint something before anyone even sees it with an assumed negativity. That seems… deceptive.===

    It’s your bias to the negative of the work, not my thought to it used negatively that is the concern.

    Words matter.

    So, simple fix. Altered in any AI way, label it.

    The ad on its own will be judged, altered as it is… unless you think AI isn’t altering it at all, because that isn’t the case.


  34. - cermak_rd - Tuesday, Sep 26, 23 @ 1:26 pm:

    I think anything AI should be labeled if broadcast. Fraud via AI in advertising, whether political or in the sale of canned goods, needs to be prosecuted.

    If someone didn’t say or do something, and altered footage shows them saying or doing it, then that’s fraud and needs to be prosecuted harshly, to shove it out of our political and all other non-entertainment spaces.


  35. - Bull Durham - Tuesday, Sep 26, 23 @ 1:36 pm:

    I’m not sure even a label would be adequate. Political ads currently have required disclosures, yet they are displayed in tiny fonts that are illegible to many readers/viewers.


  36. - Captain Obvious - Tuesday, Sep 26, 23 @ 1:42 pm:

    Voted yes but I don’t think these ads should be allowed at all.


  37. - TJ - Tuesday, Sep 26, 23 @ 1:43 pm:

    Yes. And heck, all AI depictions of any real person should be mandated to have clearly labeled fake warnings on screen.


  38. - Oswego Willy - Tuesday, Sep 26, 23 @ 1:50 pm:

    ===Nuclear science===

    I’ve yet to find an instance where anything towards “nuclear” is NOT labeled.

    If you have one…

    To the question, and to intellectual property,

    Using AI within even a photograph, as used as an example, to alter or enhance, that’s taking the art of that photograph and changing its context to what an artist might want conveyed, or taking the moment in time and changing that to what the artist wanted shown.

    There’s a reason… actors, writers, artists, even scientists… they don’t want AI in their workplaces or part of their processes unless it’s known as altered, and welcomed toward the art… with consent.

    If you alter, even in a “good” framing, any artist’s work (voice, image, photo, film, digital), it should be a given that those consuming it know they are being given AI. Intellectual property isn’t arbitrary public “base points” unless the creator has a say (or should have that say).


  39. - JS Mill - Tuesday, Sep 26, 23 @ 1:53 pm:

    Voted YES. AI takes CGI to a new hemisphere.


  40. - Joe Bidenopolous - Tuesday, Sep 26, 23 @ 1:55 pm:

    =how is using autocomplete being purposely deceptive=

    How do I get these fancy political ads with autocomplete?


  41. - Papa2008 - Tuesday, Sep 26, 23 @ 1:56 pm:

    Voted no. Why bother? Still won’t be able to tell if they’re telling the truth about that or not. Just like it is now.


  42. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 2:01 pm:

    “where anything towards “nuclear” is NOT labeled. If you have one…”

    Bananas.

    Or does your grocery store label them all as being radioactive from potassium-40?

    We already approach things this way, and there’s the specific example you asked for.

    For the same reason, labeling anything touched by AI, no matter to what degree, is equally meaningless.


  43. - Oswego Willy - Tuesday, Sep 26, 23 @ 2:06 pm:

    ===Or does your grocery store label them all as being radioactive from potassium-40?===

    Has the FDA demanded such a thing?

    Which foods *aren’t* labeled anymore?

    ===For the same reason, labeling anything as touched by AI no matter what degree it is done is equally meaningless.===

    It’s not meaningless, you say so yourself… here…

    ===Labeling any of those things is meaningless - unless your intent is to taint something before anyone even sees it with an assumed negativity.===

    You’re protecting the idea that AI isn’t deceptive.

    Friend, the “A” stands for “Artificial”


  44. - mrp - Tuesday, Sep 26, 23 @ 2:07 pm:

    Is AI generating TheInvisibleMan’s posts?


  45. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 2:12 pm:

    “It’s not meaningless”

    A label would convey no useful information. What it would do is cause some people who automatically assume anything nuclear at all equals bad to stop eating bananas.

    The only meaning such a label would have would be to allow people with an incorrect understanding of a topic to take an action not supported by facts.

    So in a sense, you are correct. The meaning of such a label would be in the hopes of effecting an action among people lacking a full understanding of a topic - which, correct me if I’m wrong, is exactly the problem you think you would be stopping with such an idea.

    “Parental Advisory”


  46. - Oswego Willy - Tuesday, Sep 26, 23 @ 2:15 pm:

    ===to stop eating bananas.===

    That’s up to the FDA. Is the FDA regulating AI?

    ===The only meaning such a label would have, would be to allow people with an incorrect understanding of a topic to take an action not supported by facts.===

    What’s incorrect? Artificial Intelligence as used. That’s a fact.

    ===The meaning of such a label would be in the hopes of effecting an action among people lacking a full understanding of a topic - which correct me if I’m wrong is exactly the problem you think you would be stopping with such an idea.===

    You want to deceive people that AI wasn’t used?

    That’s what you are saying.


  47. - Jaguar - Tuesday, Sep 26, 23 @ 2:15 pm:

    I can see a time in the not-so-distant future where all political ads will be AI generated to some extent and it will be assumed. Perhaps a certificate of non-AI content would be more valuable.


  48. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 2:16 pm:

    “Is AI generating TheInvisibleMan’s posts?”

    Perhaps I’m just an AI programmed with an LLM heavily reliant on Alan Watts, using OW’s posts as the RLHF feedback.


  49. - TheInvisibleMan - Tuesday, Sep 26, 23 @ 2:19 pm:

    “Perhaps a certificate of non AI content would be more valuable.”

    I would love to see this.

    It would show the impossibility of getting such a certification. If you can get one, go for it.

    Live speeches are about the only thing which would qualify. But where would you post it? In a text image used with AI-assisted graphical processing to print and size it onto paper? Oops. Just lost that certificate.


  50. - Oswego Willy - Tuesday, Sep 26, 23 @ 2:20 pm:

    - TheInvisibleMan -

    People are told they are getting artificial colors and flavors in food… it’s artificial intelligence being fed to consumers…

    If you wanna turn it on its artificial ear.

    ===in the hopes of effecting an action among people lacking a full understanding of a topic===

    That’s one heck of a thought… lol… tell folks… that AI is being used… because people perceive it bad… but don’t tell folks… because keeping them uninformed is being honest to the content?

    You’re selling AI as Alternate Facts


  51. - Oswego Willy - Tuesday, Sep 26, 23 @ 2:23 pm:

    ===posts as the RLHF feedback.===

    I guarantee I’d be the one human that could make AI dumber.

    For the sake of mankind, don’t use me for rhetorical feedback.

    :)


  52. - H-W - Tuesday, Sep 26, 23 @ 2:30 pm:

    I voted Yes

    However, this issue is larger than politics. As AI now enters the language of the day, the reality is that most Americans are addicted to the internet in some form or another (e.g., cable TV, network subscriptions, Google and Wiki, online sources of information including news, online access to knowledge bases, etc.).

    The average American is exposed to external sources of knowledge all the time. And in that context, it would behoove the government to regulate artificially created information that is presented to citizens, in all forms by which it is presented.

    Starting with truthful labeling is an essential first step. But we also need new legislation creating mandates for governmental bodies to oversee and regulate this burgeoning source of potentially harmful information at the federal and state levels. Better to be proactive and reactive, than simply reactive.


  53. - Give Us Barabbas - Tuesday, Sep 26, 23 @ 2:39 pm:

    This is why it’s much more dangerous than the old Photoshop tricks like darkening skin color or retouching a still image (see video). This video is old; the tech has become frighteningly more realistic since it was made… You can see the potential for a political rival to attack from the shadows and “leak” a fake with false information, run comments through the news cycle a few times to generate buzz, then stand back and watch the conspiracy nuts take the ensuing chaos to higher levels, making for a lot of views. And no matter how many times it gets debunked, some neuro-atypical, suggestible voters will believe it and act on it. Perhaps in deadly ways. https://youtu.be/gLoI9hAX9dw


  54. - Lurker - Tuesday, Sep 26, 23 @ 2:41 pm:

    I voted yes, but to me this misses the real problem, which persists: people can lie and then hide behind freedom of speech. To me, a better answer is that anytime a picture is altered in any way, or the narrative surrounding it is made up, it needs to be labeled as fictional, and if not, the punishments need to be real (starting at a minimum of $100,000 and one month in jail would be my preference).


  55. - don the legend - Tuesday, Sep 26, 23 @ 2:52 pm:

    I voted yes because: “AI is a tool.”
    Just like a match and a flame thrower are tools. Both produce fire but ……


  56. - Homebody - Tuesday, Sep 26, 23 @ 3:24 pm:

    The First Amendment jurisprudence needs to be revisited. Lying is protected speech, based on precedent from a time long before you could generate photorealistic animations of famous people saying things they didn’t say. I don’t think those are deserving of free speech protections any more than commercial fraud is.


  57. - Amalia - Tuesday, Sep 26, 23 @ 3:43 pm:

    yes. out the fake.

