*(Image removed; caption: the first known deepfake software, Video Rewrite, was applied to a speech by Kennedy.)*

Broad availability of deepfake detection tools has raised opposing opinions in the expert community. A roundtable was held by the Partnership on AI and the nonprofit organization Witness to outline the benefits and possible downsides of proliferating this technology.

One concern is abuse: with first-hand access to a deepfake detection tool, fraudsters can learn to bypass and fool it through reverse engineering, trial and error, code tampering, and other practices. A similar concern was raised in the past with regard to passive liveness detection systems.

Poor tech literacy and limited internet access are also among the main obstacles. While proliferating a "home-use" anti-deepfake tool would be easy in the "Global North" (Europe, North America, and other developed regions), it would be hindered in less developed countries. User overconfidence can be a hurdle as well: regular users may discard detection results because they poorly understand how the technology works.

Generalization is another weakness. Deepfake detection solutions are trained on datasets that contain videos and images from various sources: YouTube, Facebook, studio shots, surveillance cameras, and so on. As a result, a solution trained on one dataset may fail when facing data from a completely different source.

Finally, legal attempts at depriving journalists, fact-checkers, activists, and regular users of such technologies are seen as a vital threat: if a deepfake detection tool becomes a privilege of the political authorities, it will automatically lose its legitimacy. The roundtable mentions an incident that occurred when the detector dubbed Deepfake-o-meter was first made public.
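The dataset-generalization weakness mentioned above can be sketched with a toy example. The score distributions, threshold, and shift below are invented purely for illustration; they do not come from the article or from any real detector. The idea is that a decision threshold tuned on one "source" of footage degrades once the data distribution shifts (a different camera, compression pipeline, or platform):

```python
import random

random.seed(0)

# Hypothetical setup: "scores" stand in for whatever feature a real
# detector extracts from a video; higher scores look more "fake".
THRESHOLD = 1.0  # tuned on the source domain; above it => flagged as fake

def classify(score):
    return 1 if score > THRESHOLD else 0

def accuracy(real_scores, fake_scores):
    correct = sum(classify(s) == 0 for s in real_scores)
    correct += sum(classify(s) == 1 for s in fake_scores)
    return correct / (len(real_scores) + len(fake_scores))

N = 2000
# Source domain the detector was tuned on: real ~ N(0,1), fake ~ N(2,1)
real_src = [random.gauss(0.0, 1.0) for _ in range(N)]
fake_src = [random.gauss(2.0, 1.0) for _ in range(N)]

# Shifted domain: every score moves up by 1 (e.g. different compression),
# so genuine videos now frequently land above the old threshold.
real_tgt = [random.gauss(1.0, 1.0) for _ in range(N)]
fake_tgt = [random.gauss(3.0, 1.0) for _ in range(N)]

acc_src = accuracy(real_src, fake_src)
acc_tgt = accuracy(real_tgt, fake_tgt)
print(f"in-domain accuracy:      {acc_src:.2f}")
print(f"shifted-domain accuracy: {acc_tgt:.2f}")
```

Running the sketch shows the in-domain accuracy noticeably higher than the shifted-domain accuracy, with the drop driven by genuine videos being misflagged as fake. Real detectors mitigate this by training on mixed-source datasets, but the underlying sensitivity to distribution shift remains.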