Connectors link Summand to a live data source so analysis stays current without manual re-uploads. Pick a table, hit analyze, and Summand handles ingestion, semantic-layer computation, and surprise findings end-to-end.

Documentation Index
Fetch the complete documentation index at: https://docs.summand.com/llms.txt
Use this file to discover all available pages before exploring further.
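By convention an llms.txt index is a Markdown file of headings and links, so page discovery reduces to extracting those links. A minimal sketch under that assumption (the sample content and the `discover_pages` helper are illustrative, not Summand's actual index):

```python
import re

def discover_pages(llms_txt: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from Markdown links in an llms.txt index."""
    return re.findall(r"\[([^\]]+)\]\((\S+?)\)", llms_txt)

# Illustrative content; the real file lives at https://docs.summand.com/llms.txt
sample = """# Summand Docs
- [Connectors](https://docs.summand.com/connectors)
- [Datasets](https://docs.summand.com/datasets)
"""

pages = discover_pages(sample)
# Each entry pairs the link text with its URL, e.g. ("Connectors", "https://docs.summand.com/connectors")
```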
Supported sources
- Databricks (Delta Sharing) — connect via a Delta Sharing credential profile to read tables out of a Databricks Unity Catalog share.
- Azure SQL — host/port/database connection to an Azure SQL server with SQL or Entra credentials.
- Snowflake — account-based connection with warehouse and credential selection.
- CSV upload — one-shot file upload for ad-hoc analysis without a live source.
Databricks (Delta Sharing)
Summand connects to Databricks through the open Delta Sharing protocol — no JDBC driver, no cluster required. Provide the credential profile (the JSON file Databricks generates when you create a recipient) and Summand will:
- List shares available to the credential.
- List schemas under the chosen share.
- List tables under the chosen schema.
- Read the selected table on every refresh.
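The steps above map directly onto the open-source `delta-sharing` Python client, which addresses a table as `<profile-file>#<share>.<schema>.<table>`. A sketch of that coordinate format (profile path, share, schema, and table names are placeholders; the client calls in the comments are the library's public API):

```python
def table_coordinate(profile_path: str, share: str, schema: str, table: str) -> str:
    """Build the table URL the Delta Sharing client expects:
    <profile-file>#<share>.<schema>.<table>
    """
    return f"{profile_path}#{share}.{schema}.{table}"

# With the client installed (pip install delta-sharing) the listing steps become:
#   client = delta_sharing.SharingClient("config.share")
#   client.list_shares()          # shares visible to the credential
#   client.list_schemas(share)    # schemas under a chosen share
#   client.list_tables(schema)    # tables under a chosen schema
#   delta_sharing.load_as_pandas(table_coordinate(...))  # read the table

url = table_coordinate("config.share", "sales_share", "retail", "orders")
# -> "config.share#sales_share.retail.orders"
```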
Azure SQL
Azure SQL connectors take a host, port, database, and credentials, then expose the same schema browser and table picker as the other database connectors. Summand reads the table on each refresh; nothing is mutated on your server.

What a connector gives you
- Schema discovery. Browse databases, schemas, and tables before committing to one.
- Table selection. Pick a single table per dataset; the schema is captured at ingestion time.
- Validation. Summand sanity-checks the table (row count, column types, target column eligibility) before kicking off a run.
- Scheduled refresh. Hourly, daily, or weekly recomputation. Scheduled refresh skips ingestion when the source hasn’t changed and re-runs only the semantic layer, which is roughly 80% cheaper than a full reload.
- Sharing. Connectors can be shared with teammates so they can build datasets off the same source without re-entering credentials.
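The validation step described above can be pictured as a pre-flight check over the ingested rows. This is an illustrative sketch, not Summand's actual rules: the `validate_table` helper, the row-count threshold, and the single-value eligibility test are all assumptions (a real check would also inspect column types):

```python
def validate_table(rows: list[dict], target_column: str, min_rows: int = 10) -> list[str]:
    """Return a list of validation problems; an empty list means the table passes."""
    problems = []
    if len(rows) < min_rows:
        problems.append(f"too few rows: {len(rows)} < {min_rows}")
    if rows and target_column not in rows[0]:
        problems.append(f"target column {target_column!r} not found")
    else:
        # Target eligibility: a constant column leaves nothing to explain.
        values = {r[target_column] for r in rows}
        if len(values) < 2:
            problems.append(f"target column {target_column!r} has a single value")
    return problems

rows = [{"region": "EU", "revenue": i} for i in range(20)]
ok = validate_table(rows, "revenue")        # passes: []
short = validate_table(rows[:3], "revenue") # flags the short sample
```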

