Privacy Policy
Welcome to SplashArk. We are committed to protecting your privacy and ensuring the security of your personal information. This Privacy Policy outlines how we collect, use, disclose, and protect your information when you use our website, mobile application, and related services (collectively, the "Services").
Information We Collect
When you use our Services, we may collect the following types of information:
Personal Information:
This may include your name, email address, phone number, date of birth, gender, profile picture, and other information you provide to us.
User Content:
Any content you create, upload, post, or share on our platform, including photos, videos, comments, and messages.
Usage Information:
We collect information about how you interact with our Services, such as your browsing history, search queries, IP address, device information, and cookies.
Payment Information:
If you make purchases or subscribe to premium features, we may collect payment details, including credit card information and billing address.
Location Information:
With your consent, we may collect precise or approximate location data when you use location-based features.
Social Media Data:
If you link or sign in through a third-party social media account, we may collect information from that service, such as your profile details and contacts, in accordance with your settings on that service.
How We Use Your Information
We use the information we collect for the following purposes:
- To provide and personalize our Services, including recommendations, content suggestions, and advertisements.
- To communicate with you about your account, updates, promotions, and news.
- To improve and optimize our Services, develop new features, and conduct research and analysis.
- To process transactions, payments, and subscriptions.
- To comply with legal obligations and enforce our Terms of Service and other policies.
Information Sharing and Disclosure
We may share your information with third parties in the following circumstances:
- With service providers and partners who assist us in operating, maintaining, and improving our Services.
- With advertisers and advertising networks to display relevant ads and measure their effectiveness.
- With law enforcement agencies, regulators, or other parties in response to legal requests or to protect our rights, property, and safety, or the rights, property, and safety of others.
- With your consent or at your direction, including when you choose to share information publicly or with specific users.
Data Retention
Security
Your Choices
You have the following rights and choices regarding your information:
- You can access, update, or delete your account information and preferences at any time by logging into your account settings.
- You can opt-out of receiving promotional emails by following the instructions in the email or contacting us directly.
- You can manage your cookie preferences through your browser settings or device settings.
- You can choose not to provide certain information, but this may limit your ability to use certain features of our Services.
Children's Privacy
Our Services are not intended for children under the age of 13, and we do not knowingly collect personal information from children under 13. If you believe we have inadvertently collected information from a child under 13, please contact us immediately.
Changes to this Privacy Policy
We may update this Privacy Policy from time to time to reflect changes in our practices or legal requirements. We will notify you of any material changes by posting the updated Privacy Policy on our website or through other communication channels.
Contact Us
If you have any questions or concerns about this Privacy Policy or our data practices, please contact us.
SplashArk Child Safety Standard
A Zero-Tolerance Policy Against Child Sexual Abuse and Exploitation (CSAE)
1. Core Principle: Zero-Tolerance and Safety by Design
SplashArk maintains a zero-tolerance policy for any activity involving or facilitating Child Sexual Abuse and Exploitation (CSAE), including the creation, distribution, or solicitation of child sexual abuse material (CSAM). Our commitment to child safety is a non-negotiable cornerstone of our platform. This standard is deeply integrated into the architecture of our platform rather than added as a reactive afterthought. This policy applies comprehensively across all platform functions: social media, e-commerce, service booking, and Pay-Per-View (PPV) content.
We are committed to:
- Preventing our platform from being used to harm children.
- Detecting and removing abusive content and bad actors proactively.
- Reporting all instances of suspected CSAM and CSAE to the relevant authorities, specifically the National Center for Missing & Exploited Children (NCMEC).
- Responding to victims and our community with robust support and clear reporting channels.
2. Platform-Wide Governance and Accountability
Designated Child Safety Officer (CSO):
A senior-level CSO (or equivalent Head of Trust & Safety) is appointed with the authority and resources to implement, maintain, and enforce this policy.
Specialized Moderation Team:
We will maintain a dedicated, well-trained, and psychologically-supported content moderation team. This team will include:
- Proactive Moderation: Trained moderators who proactively review high-risk content, surfaces, and accounts before any report is filed.
- Reactive Moderation: A 24/7 team that reviews user reports and AI-flagged content within industry-standard timeframes (e.g., CSAM reports are reviewed within 24 hours, and faster for PPV content uploads).
Cross-Functional Team:
The CSO will lead a cross-functional team with members from Legal, Engineering, Product, and Customer Support to ensure all new features undergo a Child Safety Risk Assessment prior to launch.
Transparency Reporting:
We will publish an annual Transparency Report detailing the volume of CSAM detected, accounts removed, and reports filed with NCMEC.
3. Prevention and Detection: A Tiered Technological Approach
We deploy a layered defense system that integrates automated technology with expert human review.
3.1 Content-Level Detection (CSAM & Harmful Content)
CSAM Hashing: All uploaded images and videos (including in chats, profiles, listings, and PPV content) are automatically scanned against industry-standard CSAM hash databases (e.g., NCMEC, IWF) using technology like PhotoDNA. Any match is blocked from appearing, the content is preserved securely, and a report is generated to NCMEC.
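For engineering teams implementing this standard, the match-block-preserve-report flow can be outlined as follows. This is an illustrative sketch only: production systems match perceptual hashes (e.g., PhotoDNA) against licensed databases, and the SHA-256 digest and stub functions here are hypothetical stand-ins.

```python
import hashlib

# Illustrative sketch only. Production systems match perceptual hashes
# (e.g., PhotoDNA) against licensed databases (NCMEC, IWF); the SHA-256
# digest and the stub functions below are hypothetical stand-ins.
audit_log: list[str] = []

def preserve_evidence(digest: str) -> None:
    # In production: secure, access-controlled evidence preservation.
    audit_log.append(f"preserved:{digest[:8]}")

def file_ncmec_report(digest: str) -> None:
    # In production: a CyberTipline report filed with NCMEC.
    audit_log.append(f"reported:{digest[:8]}")

def scan_upload(data: bytes, known_hashes: set[str]) -> str:
    """On a hash match: block the content, preserve it, and report it."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in known_hashes:
        preserve_evidence(digest)
        file_ncmec_report(digest)
        return "blocked"
    return "allowed"
```

Every upload path (chat, profile, listing, PPV) would route through the same `scan_upload` choke point, so no surface bypasses the check.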
AI-Driven Content Classification: We use machine learning (ML) classifiers to proactively scan for new and unknown CSAM, as well as content depicting nudity, sexual activity, and other policy violations.
Keyword & Slang Filtering: Our systems scan all text (chats, comments, listings) for a dynamic, evolving database of high-risk phrases, emojis, and coded text (e.g., "phone," "noodle emoji," "CD9") associated with grooming, solicitation, and trafficking. This list is informed by experts like the Australian Centre for Child Protection Technology and the CSAM Keyword Hub.
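A minimal sketch of the text-scanning step, under stated assumptions: the term list below is hypothetical and abbreviated (the production database is dynamic and expert-curated), and substring matching after normalization is only one of several techniques such a system would use.

```python
import re

# Hypothetical, abbreviated term list for illustration; terms are
# stored with separators removed so obfuscated variants still match.
HIGH_RISK_TERMS = {"cd9", "meetirl"}

def normalize(text: str) -> str:
    """Lowercase and strip punctuation/whitespace to defeat simple
    obfuscation such as 'C.D.9' or 'c d 9'."""
    return re.sub(r"[\W_]+", "", text.lower())

def flag_text(text: str, terms: set[str]) -> list[str]:
    """Return high-risk terms found in the normalized text, for routing
    to human review (substring matching can over-flag; review decides)."""
    cleaned = normalize(text)
    return sorted(t for t in terms if t in cleaned)
```

Flagged messages would be queued for human review rather than auto-actioned, since coded language shifts quickly and substring hits can be false positives.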
3.2 Behavioral-Level Detection (Grooming & Predation)
Our AI models are trained to detect behavioral patterns indicative of grooming, even when no single keyword is used:
Grooming Characteristics: The system flags conversations that match known grooming patterns, such as "Asking for Profile Info," "Attempting to Isolate," "Offering Money," "Requesting Pictures."
Noxious Behavior: We monitor for suspicious account behaviors, including:
- An adult user attempting to impersonate a minor.
- A user sending a high volume of unsolicited messages to users who appear to be minors.
- Attempts to move a conversation to a less-secure, off-platform app (e.g., "what's your Kik/WhatsApp?").
- A user's account identifiers (phone, email, IP address) being linked to known high-risk or adult-only sites.
4. E-Commerce & Service Booking Safety Standards
This section addresses the unique risks inherent in SplashArk's commercial and in-person service functionalities.
4.1. Tiered Provider Vetting (KYC/KYP)
We employ a risk-based, tiered vetting system for all sellers and service providers. Vetting requirements increase based on the potential risk of harm:
Tier 1 (Casual E-commerce): Users casually selling low-risk physical goods.
Requirement: Verified email and phone number.
Tier 2 (Digital Goods & PPV): Users selling digital content or PPV access.
Requirement: Full Know Your Provider (KYP) verification, including name, date of birth, and physical address, plus:
- Submission of a valid, unexpired government-issued photo ID (e.g., driver's license, passport).
- A live biometric selfie to match the user to the ID.
- Verification of a payout method (e.g., bank account).
Tier 3 (In-Person Services): Users offering in-person services (e.g., tutoring, cleaning, photography, handyman).
Requirement: All Tier 2 requirements plus:
- Mandatory U.S. Background Check: This includes a criminal records search (county, state, federal) and a search of the National Sex Offender Registry.
- Clear display of a "Vetted Provider" badge on their profile.
4.2. E-Commerce Monitoring (Transactions & Listings)
Our Trust & Safety team actively monitors for trafficking and exploitation indicators based on guidance from financial intelligence agencies such as FinCEN and AUSTRAC:
Financial Transaction Red Flags:
- Transactions at unusual hours or in rapid, repetitive patterns.
- Immediate withdrawal of funds after a sale.
- Use of prepaid cards, wire transfers, or third-party payment processors.
- A series of low-value, innocuous-looking transactions (e.g., "uniforms") combined with purchases of high-risk items.
Product & Service Listing Red Flags:
- Suspicious Job Offers (e.g., "modeling," "acting," or "personal assistant" with vague descriptions and "too good to be true" pay).
- Obfuscated Text: Use of coded characters to hide phone numbers or high-risk keywords.
- Suspicious Descriptions: Listings whose descriptions or images indicate the involvement of a minor, or identical copy-pasted descriptions repeated across many listings.
- Pricing Anomalies: A common, low-value item listed at an exorbitant "buy-it-now" price (e.g., $1,000), which can be a front for illicit transactions.
Minor Seller Policy: A user under 18 who wishes to become a seller requires verified parental consent and identity verification for both the minor and their legal guardian.
Transaction Monitoring: The platform operates a real-time transaction monitoring system to detect and flag exploitation-associated activity (as identified by financial crime entities like FinCEN). Red flags include:
Pattern Red Flags: A seller receiving multiple, small-dollar payments from many different buyers with no clear product; a sudden, unexplained spike in sales for a new seller.
Behavioral Red Flags: Payments to a seller from unrelated third parties; payment activity that is inconsistent with the seller's stated business.
High-Risk Payments: Use of third-party payment processors, P2P payment methods, or convertible virtual currencies (CVCs) that attempt to obscure the source of funds.
Prohibition on Adult-Minor Payments: Users under 18 are prohibited from sending money to, or receiving money from, any adult user not in their Family (as defined by our parental guardian account link).
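The pattern and behavioral red flags above can be sketched as a simple rule engine. This is a hypothetical illustration: the `SellerActivity` fields and every threshold (20 buyers, $25, 10x spike) are invented for the example, not production values, and any hit would go to human review rather than trigger automatic enforcement.

```python
from dataclasses import dataclass

@dataclass
class SellerActivity:
    """Simplified, hypothetical view of a seller's recent activity window."""
    payments: list[float]        # incoming payment amounts in the window
    distinct_buyers: int         # unique payers in the window
    active_listings: int         # listings tied to a real product/service
    prior_weekly_sales: float    # baseline weekly sales volume
    current_weekly_sales: float  # this week's sales volume

def red_flags(a: SellerActivity) -> list[str]:
    """Rule-based sketch of the red flags above; thresholds are
    illustrative only, and flagged sellers go to human review."""
    flags = []
    # Many small payments from many buyers with no clear product.
    if (a.distinct_buyers >= 20 and a.active_listings == 0
            and all(p < 25 for p in a.payments)):
        flags.append("many small payments, no clear product")
    # Sudden, unexplained sales spike (covers new sellers with no baseline).
    if (a.current_weekly_sales > 500
            and a.current_weekly_sales > 10 * a.prior_weekly_sales):
        flags.append("sudden unexplained sales spike")
    return flags
```

In practice such rules would run alongside ML-based anomaly detection, since fixed thresholds are easy for bad actors to probe and evade.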
5. Pay-Per-View (PPV) & Adult Content Standards
All PPV content creators are subject to the Tier 2 Vetting Standard at minimum. For creators uploading adult content, we also adhere to 18 U.S.C. § 2257 (or its foreign equivalent). The creator warrants (legally promises) that they have:
- Obtained a signed, dated consent and release form from every individual (participant) appearing in their PPV content.
- Obtained and securely retained a copy of a valid, government-issued photo ID for every participant verifying that they are 18 years of age or older.
Auditing Rights: SplashArk reserves the right to audit any adult content creator at any time. Upon request, the creator must provide documentation of consent and ID for all participants in their content. Failure to provide this documentation will result in immediate account termination and forfeiture of funds.
6. Reporting, Response, and Victim Support
Clear & Accessible Reporting:
A "Report" button is available on all user profiles, messages, listings, and content. The reporting flow allows users to easily categorize the issue as "Child Safety Concern."
Prioritized Triage:
All reports related to child safety are routed to a high-priority queue for immediate 24/7 review by our specialized team.
Immediate Action Protocol:
Confirmed CSAM: The content is immediately removed from the platform, the user's account is suspended, the content is preserved as evidence, and a report is filed with NCMEC.
Suspected Grooming: The user is suspended pending investigation. If grooming is confirmed, the user is permanently banned, and a report is filed with law enforcement.
Underage User: The user's account is immediately suspended. We provide a path to appeal (e.g., by submitting an ID) to prove age.
Victim Support:
We will provide clear, on-platform links to resources for victims and families, including NCMEC's CyberTipline and StopNCII.org (to prevent the spread of non-consensual images).
