Coinbase Tests AI Chatbot, Finds It Inadequate for Security Review
• Coinbase recently tested ChatGPT, an AI chatbot, to assess whether it could accurately perform token security reviews.
• The tests found ChatGPT unfit for the task because it failed to correctly identify high-risk assets.
• Despite its inaccuracies, Coinbase acknowledged that ChatGPT showed promise for quickly assessing smart contract risks and will continue investigating its use cases.
Coinbase Testing ChatGPT
American crypto exchange Coinbase recently tested ChatGPT, the artificial intelligence chatbot that has surged in popularity. The firm ran the test to determine whether the chatbot could accurately perform its token security reviews.
Results of Testing
The tests found that ChatGPT was not fit for the task because it failed to correctly identify high-risk assets. The exchange’s blockchain security team ran a series of prompts through ChatGPT and compared its risk scores for 20 smart contracts against the scores from a manual security review. ChatGPT disagreed with the manual review on 8 of the 20 contracts, and in 5 of those misses it labeled a high-risk asset as low risk. This indicated gaps in coverage, with additional dependencies going unreviewed.
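For illustration only, the sketch below shows how such a comparison might be tallied: labels from a manual review are treated as ground truth, the chatbot’s labels are checked against them, and the count of overall misses and of the more dangerous case (a high-risk asset marked low risk) is reported. The token names, labels, and scoring scheme are hypothetical assumptions, not Coinbase’s actual methodology.

```python
# Hypothetical sketch: tallying agreement between a chatbot's risk labels
# and a manual security review. All names and labels below are
# illustrative placeholders, not Coinbase data.

manual_labels = {  # ground truth from the manual review (illustrative)
    "token_a": "high", "token_b": "low", "token_c": "high",
}
llm_labels = {     # labels produced by the chatbot (illustrative)
    "token_a": "low", "token_b": "low", "token_c": "high",
}

# Contracts where the chatbot's label differs from the manual review.
misses = {t for t in manual_labels if llm_labels.get(t) != manual_labels[t]}

# The riskiest failure mode: a high-risk asset labeled low risk.
false_negatives = {
    t for t in misses
    if manual_labels[t] == "high" and llm_labels.get(t) == "low"
}

print(f"compared: {len(manual_labels)}, misses: {len(misses)}, "
      f"high-risk labeled low: {len(false_negatives)}")
```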
Potential Benefits
Coinbase acknowledged that, despite its inaccuracies, ChatGPT showed promise for improving productivity by quickly assessing smart contract risks. That potential was enough for the firm to continue investigating its use cases.
Conclusion
The Coinbase team concluded that the AI tool cannot be relied upon on its own to perform a security review and should only be used with appropriate context in order to produce accurate results. Even so, they will continue researching how best to integrate it into their process moving forward.
Takeaway
AI tools are difficult to implement effectively: they require extensive research and testing before being integrated into processes like security reviews at exchanges such as Coinbase, since they can produce inaccurate results if not handled properly.