Acceptable Use Policy

Last Updated: October 2024

This Acceptable Use Policy (“Policy”) governs your use of data.world. 

This Policy applies to Members and Users. “Members” refers to individuals or business entities who register for an account with data.world on the Site through the open side of the data.world platform. “Users” refers to those individuals who work for an enterprise customer or prospect of data.world and who have been provided access by data.world to the Services on behalf of an enterprise customer or prospect.

Applies to All Members and Users

Members and Users may not use data.world to:

  1. Conduct illegal or fraudulent activity

  2. Violate the rights of others

  3. Use or generate discriminatory language

  4. Condone or encourage violence against people or groups or other harassment of any kind

  5. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct

  6. Post or generate sexually explicit or violent, harmful, or cruel material

  7. Post or generate any content or activity that promotes child sexual exploitation or abuse

  8. Upload or generate nudity, pornographic, violent, or hateful imagery

  9. Post (or threaten to post) other people’s personally identifying information (doxing)

  10. Post or generate personal insults, especially those using racist or sexist terms

  11. Cause or contribute to an atmosphere that excludes or marginalizes others

  12. Post (or threaten to post) or generate information related to suicidal or self-injurious behaviors

  13. Upload or generate content that promotes false, harmful, or misleading information

  14. Threaten, incite, promote, or actively encourage violence, terrorism, or other serious harm

  15. Advocate for, glorify, or encourage any of the above behavior

  16. Harass others

  17. Violate the security, integrity, or availability of any user, network, computer or communications system, software application, or network or computing device

  18. Send unsolicited email, SMS, or "spam" messages or other promotions, advertising, or solicitations, or create accounts, datasets, or projects solely to increase your search engine results

  19. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices

  20. Impersonate another individual without consent, authorization, or legal right

  21. Conduct activities that present a risk of death or bodily harm to yourself or others including, but not limited to:

    1. Weapons development 

    2. Property destruction

    3. Military, warfare, nuclear industries or applications, espionage, or use for materials or activities that are subject to the International Traffic in Arms Regulations maintained by the United States Department of State

    4. Illegal drugs and regulated/controlled substances promotion, manufacture, or distribution

    5. Operation of critical infrastructure, transportation technologies, or heavy machinery

This Acceptable Use Policy is not intended to be comprehensive. For additional information, please review our Terms of Use or your company’s enterprise software and services terms.

To report a violation of this policy, please open a ticket at https://help.data.world to let us know. 

We take non-compliance with this Policy seriously. Infractions will be handled on a case-by-case basis and may result in termination of access to data.world.

Applies to Users Only

If using an AI feature provided by data.world including, but not limited to, the AI Context Engine Application or Archie (collectively, “AI Products”), Users may not:

  1. Use the AI Products to make decisions impacting individual rights or well-being in areas such as finance, employment, healthcare, housing, insurance, and social welfare

  2. Disable, evade, disrupt, or interfere with any content filters or safety systems that are part of any AI Product

  3. Misrepresent the origin of any response or result from any AI Product (“Output”) 

  4. Claim or imply such Output was created by a human, or represent the Output as an original work

  5. Remove any markings that indicate an Output is AI-generated

  6. Use the AI Products to facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights

  7. Use an AI Product without first determining if its use is effective and safe

  8. Use the AI Products for (i) prohibited practices under applicable laws such as the European Union Artificial Intelligence Act; (ii) any use that would result in the AI Products being declared a high-risk AI system or otherwise qualifies as “high-risk” under any applicable law; or (iii) any use that is sensitive, critical, unsafe, high-risk, or hazardous (including any use that could result in death or serious bodily injury, catastrophic damage, warfare, or the operation of critical infrastructure)

  9. Use any AI Products without appropriately disclosing to end-users any known risks or dangers of the use of such AI Products

  10. Use the AI Products or any Output to aid in developing, training, re-training, fine-tuning, testing, improving, or enhancing products or services that compete with the AI Products, including other artificial intelligence models

Users acknowledge:

  1. the results generated by the AI Products are produced without direct human involvement.

  2. the AI Products will generate Outputs created by artificial intelligence technology that must be thoroughly reviewed and validated by the User before relying on them.

  3. the AI Products may provide Output that is inaccurate, misleading, unreliable, or upsetting.

  4. the AI Products may produce the same or similar Output to other customers. If the User receives a claim or a notice of a claim that the Output violates a third party’s intellectual property rights, User will promptly stop using, displaying, or distributing the Output. User shall not use or communicate to third parties any Output in breach of any applicable law.

  5. User is responsible for any decisions, actions, and/or inactions arising from User’s use of any AI Products, including ensuring compliance with applicable laws, regulations, and other legal requirements.

If the User decides to use any Services or functionality that may be made available to User to try at its option at no charge or which is designated as beta, pre-release, private preview, pilot, limited release, developer preview, non-production, evaluation, or by a similar description (collectively, the “F/B Services”), such use is subject to the following:

  1. The F/B Services are intended for evaluation purposes and not for production use, may not be supported, and may be subject to supplemental terms.

  2. The F/B Services are subject to change at any time, including their availability.

  3. The F/B Services are not subject to any warranty or indemnification obligations on the part of data.world and are provided on an “AS IS” and “AS AVAILABLE” basis.

Applies to Members Only

Members shall: 

  1. Only upload content to data.world that Member has the legal right to upload. 

  2. Follow the requirements of the license for datasets the Member does not own or create (which usually requires clear attribution). 

  3. Select the proper licensing setting on data.world to reflect the license from the original data source, if applicable.

  4. Not make datasets public if they contain personal information. Some examples of this type of information include datasets containing Personally Identifiable Information (PII), Protected Health Information (PHI), or Personal Financial Information (PFI).

    1. PII (Personally Identifiable Information) is information that either alone or in combination with other information could be used to identify, locate, or contact an individual. Some examples include: a person’s name, address, email address, Social Security number, or credit card number.

    2. PHI (Protected Health Information) is information about health conditions, treatments or procedures a person has undergone, or payments someone has made or collected that can be linked to a specific person.

    3. PFI (Personal Financial Information) is information about financial condition, transactions, or activities that a person has undertaken that can be linked to a specific person.
