Contributed by WSTA Board & Committee Members

On February 20th, more than 80 financial technology & business professionals met at the Yale Club of NYC for a morning session focused on new technologies changing the capabilities of analytics & AI, and how these technologies will impact financial services firms.

Many great ideas & interesting perspectives were shared during this panel discussion, but here are some of the top takeaways from our Board & Committee Members in attendance. You can check out the agenda here.

Randall Brett,
WSTA Director

1. Data quality and data governance are key. Garbage In/Garbage Out (and, in the case of AI, Garbage In/Nuclear War) is more important than ever.
2. The huge surge of data and the availability of inexpensive compute at scale are making Machine Learning possible for many new kinds of tasks.
3. Risk management is crucial. The higher the stakes of the “decisions” (running a report at the low end; life-or-death decisions, such as those made by self-driving cars, at the other), the more important the vetting and testing.

John Checco,
Special Advisor Committee

1. Ditch the generic AI/ML moniker for every problem. AI platforms for chatbots, communications, trending reports, decision systems, and ad-hoc queries should not be grouped into a single bucket nor judged by the same criteria.
2. Embrace the mindset of “Narrow AI”. AI and machine learning solutions are not all-encompassing; rather, each problem (or project) should be examined independently and tuned to get the best results for that problem set.
3. The old adage of “garbage in, garbage out” is exacerbated by AI/ML. Instead of spending invaluable resources on attempting to clean or obtain perfect data, focus on the thresholds at which dirty or incomplete data starts to skew the AI/ML results by pre-production testing any solution against synthetic data sets representing perfect, good, dirty, and incomplete data.
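As a rough sketch of that testing idea (the data, the stand-in “model”, and the corruption scheme here are illustrative assumptions, not anything discussed on the panel): generate a clean synthetic data set, then evaluate the same decision logic against progressively dirtier copies to see where quality starts to fall off.

```python
import random

random.seed(0)

def make_clean_data(n=1000):
    # Synthetic "perfect" set: the label is 1 exactly when the feature > 0.5
    return [((x,), 1 if x > 0.5 else 0)
            for x in (random.random() for _ in range(n))]

def corrupt(data, dirty_rate):
    # Simulate dirty/incomplete records by blanking the feature (None)
    # for a given fraction of rows
    return [(((None,), y) if random.random() < dirty_rate else ((x,), y))
            for (x,), y in data]

def accuracy(data):
    # Trivial stand-in "model": predict 1 when feature > 0.5,
    # fall back to predicting 0 when the feature is missing
    correct = 0
    for (x,), y in data:
        pred = 0 if x is None else (1 if x > 0.5 else 0)
        correct += (pred == y)
    return correct / len(data)

clean = make_clean_data()
for rate in [0.0, 0.1, 0.25, 0.5, 0.75]:
    print(f"dirty_rate={rate:.2f}  accuracy={accuracy(corrupt(clean, rate)):.3f}")
```

Running a sweep like this makes the degradation threshold visible before production: the point where accuracy drops below an acceptable floor tells you how much dirt your pipeline can tolerate.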

Shyamal Sen,
Content Advisory & Seminar Content Review Committees

1. Full-functionality digital co-workers are going to be part of the future standard workflow. How do we pick the candidates?
2. Privacy laws may be violated when mining data & social media in an attempt to detect synthetic fraud.
3. Often the AI itself is not biased; rather, the humans who defined the problem or scoped/shaped the data are. Testing should be comprehensive, and decision execution should be moderated.


Want more fintech insights? There’s nothing better than attending upcoming WSTA Panel Discussions & Seminars!