Talk for Article "New report warns of malicious use of AI"

Talk about this Article

  1.

    I suppose really the ethics of artificial intelligence deserves its own dedicated section.

    I am wary when governments speak about the need to legislate or regulate the development of AI; any ‘misuse’ of the technology always seems to coincide with anything that weakens the authority of the office they hold.

    How bad could realistic videos for political manipulation be if, for example, they manipulated us into recycling vociferously?

    If governments legislate too quickly or too harshly, they could kill off research into artificial intelligence, and the hype around the potential uses of AI seems to make governments ever eager to bring out the bureaucratic sledgehammer.

  2.

    Hi, just while I get my bearings with WikiTribune: is this an article about the report in the title, or about the pros and cons of AI in general?

    If the latter, then I may have things to contribute about the sources of values, goals and preferences in AIs, especially if any are given the freedom (or develop the ability) to pursue sub-goals of securing resources to further their primary goals.

    Is it possible to invite someone like Nick Bostrom to contribute to a story like this? That is a more general question about inviting experts to contribute, and about how the crowdsourcing of wikis actually works.

    1.

      Hello Jon, the article, or stub rather, is just about the report, which you can add to.

      If you wish to contribute to the pros and cons, feel free to create your own story, I’d love to read it 🙂 We are totally OK with inviting experts like Nick Bostrom as long as he’s happy to join.

      1.

        Thanks for those two answers Linh. Much appreciated.

        ‘How wikis might be influenced by AIs’ might be an interesting topic for WikiTribune, both in relation to AI itself and within the broader, pertinent topics of truth, rhetoric, narratives, history writing, fake news, propaganda, confirmation bias, marketing… in an age of instant dissemination and replication of information.

  3.

    I’m pretty sure that what you did there was use a second example of the kind of issue I highlighted to support the first. There was nothing ‘intelligent’ about the tool used in the ‘fake Obama’ video – it was just a complex video-editing tool using a big data pool to match words to video (badly).
    AI is a very interesting topic – but allowing PR flacks to label any complex software as ‘AI’ without challenge is not helpful. It will end up driving general indifference when time shows it to be self-interested nonsense, just like the very similar hype over the ‘millennium bug’ did nearly 20 years ago.
    I honestly think that a critical look at this kind of press release could be really helpful.

    1.

      As stated on the project website (http://grail.cs.washington.edu/projects/AudioToObama/), the tool used recurrent neural networks (https://en.wikipedia.org/wiki/Recurrent_neural_network), which is clearly an AI technique. Whether video editing that relies on AI techniques (most modern video-editing software is filled with them) still counts as “AI” is beyond the scope of this article – that is a debate for academia itself. I share the irritation at the labelling of “AI”, but in this case I think it is OK.
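For readers who haven’t met the term: a recurrent neural network processes a sequence one step at a time while carrying a hidden state forward, which is what lets it relate, say, a stream of audio features to mouth movements frame by frame. A minimal illustrative sketch in Python/NumPy (toy dimensions and random, untrained weights; this is not the actual AudioToObama system, which used a far more elaborate trained recurrent model):

```python
import numpy as np

# Vanilla RNN cell: at each time step the hidden state is updated from
# the current input and the previous hidden state, so each output
# depends on the whole sequence seen so far.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8  # toy sizes for illustration only
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

def rnn_forward(xs):
    """Run the cell over a sequence xs of shape (T, input_dim)."""
    h = np.zeros(hidden_dim)
    states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.array(states)

seq = rng.normal(size=(10, input_dim))  # e.g. 10 frames of audio features
states = rnn_forward(seq)
print(states.shape)  # (10, 8): one hidden state per time step
```

In a real system the weights would be trained (here on hours of footage) so that the hidden states predict the target output, rather than being random as in this sketch.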

      1.

        I’m sorry – I just don’t buy it. Using a recurrent neural network is simply a massive trial-and-error process to find the least-worst solution to a problem; it is no more intelligent than the goal-seeking function in Microsoft Excel, it just uses more data and more complex data.

        If WikiTribune is going to uncritically publish PR puff pieces like this, it just becomes another standard news outlet. The opportunity for the community to critically analyse this kind of press release should not be missed – it offers a great chance to make the people writing these releases raise their game and really contribute to a sensible debate.

        Dumb lines like “The authors warn that AI could potentially turn drones into weapons” are begging for a critical response…

  4.

    It seems to me that this is one of those press releases about a ‘current issue’ put out by a body of one kind or another with the primary objective of giving the body in question a bit of publicity, rather than advancing the debate on the substantive issue – and it should be called out as such.

    Unless you take a ridiculously broad view of what ‘AI’ is, there is no more way to cause harm with ‘AI’ video-editing software than with regular video-editing software – in fact, how can video-editing software ever use ‘AI’ technology?

    1.

      Hi Roger, please follow this link to the infamous fake Obama speech to see how videos can be faked using AI technology: http://www.bbc.co.uk/news/av/technology-40598465/fake-obama-created-using-ai-tool-to-make-phoney-speeches

      The artificial intelligence studied thousands of videos of Obama to learn how he speaks, in order to produce a video that is realistic but fake. Regular video software cannot do that.
