EverCommerce - Senior Data Engineer


<div><div><p><b>About EverCommerce</b></p></div><div><p>At EverCommerce (Nasdaq: EVCM), we are on a mission to digitally transform the service economy with tailored, end-to-end SaaS solutions that simplify and empower the lives of our 725,000+ customers. As a leading service commerce platform, our modern digital and mobile applications create predictable, informed, and convenient experiences between customers and their service professionals in the Home & Field Services, Health Services, and Wellness industries.</p><p>We are building an extraordinary company and looking for talented, energetic, and motivated people to join our team. You can learn more about our Company, Culture and Values here: <a href="https://careers.evercommerce.com/us/en" target="_blank" rel="noopener noreferrer">https://careers.evercommerce.com/us/en</a> </p></div><div><p><span><span>Data is central to how we build products, drive decisions, and unlock innovation. 
Our data platform supports analytics, real-time insights, and emerging AI-driven capabilities across the EverCommerce ecosystem.</span></span></p></div><div><p> </p></div><div><p><b>Role Overview</b></p></div><div><p>We are looking for a <b>Senior Data Engineer</b> to design, build, and scale a modern data platform that supports analytics, real-time use cases, and AI-enabled products.</p></div><div><p>This is a <b>high-impact, hands-on role</b> in which you will lead the development of robust, scalable data systems, mentor engineers, and partner cross-functionally to deliver trusted, high-quality data. You will also help evolve our platform toward <b>automation and intelligent pipeline development</b>, leveraging modern tooling and AI where it creates real efficiency.</p></div><div><p><b>Responsibilities:</b></p></div><div><ul><li><p>Design, build, and operate <b>scalable batch and streaming data pipelines</b></p></li><li><p>Lead architecture decisions for <b>Lakehouse-based data platforms</b></p></li><li><p>Develop and orchestrate workflows using <b>Apache Airflow</b></p></li><li><p>Build transformations and analytics-ready datasets using <b>DBT</b></p></li><li><p>Develop and maintain <b>real-time pipelines using Kafka</b></p></li><li><p>Leverage <b>Databricks</b> for large-scale data processing and advanced analytics</p></li><li><p>Design and optimize storage using <b>Apache Iceberg and Lakehouse architecture</b></p></li><li><p>Ingest and manage data from diverse sources using tools such as <b>Fivetran</b> and a managed data lake</p></li><li><p>Build and maintain a <b>semantic layer</b> for trusted reporting and self-service analytics</p></li><li><p>Implement <b>data quality frameworks</b>, observability, and automated testing</p></li><li><p>Optimize performance, scalability, and cost across <b>AWS services</b> (Athena, EC2, etc.)</p></li><li><p>Partner with BI, product, and engineering teams to deliver actionable data solutions</p></li><li><p>Mentor junior engineers and contribute to <b>engineering best practices and standards</b></p></li><li><p>Drive improvements in <b>developer productivity and pipeline reliability</b></p></li></ul></div></div><div><div><p> </p></div><div><p><b>Skills and Qualifications needed for this 
role:</b></p></div><div><ul><li><p>7+ years of experience in Data Engineering or a related field</p></li><li><p>Strong proficiency in <b>Python and SQL</b></p></li><li><p>Deep experience with <b>Apache Airflow</b> and workflow orchestration</p></li><li><p>Expertise in <b>DBT</b> for data transformation and modeling</p></li><li><p>Strong hands-on experience with <b>Databricks</b></p></li><li><p>Strong experience building <b>streaming pipelines (Kafka or similar)</b></p></li><li><p>Strong hands-on experience with data ingestion tools such as <b>Fivetran</b></p></li><li><p>Hands-on experience building <b>automated QA, monitoring, and observability for data lake / lakehouse environments</b></p></li><li><p>Solid understanding of <b>Lakehouse architecture and Apache Iceberg</b></p></li><li><p>Experience implementing <b>data quality, testing, and observability frameworks</b></p></li><li><p>Familiarity with the <b>AWS ecosystem</b> (Athena, EC2, S3, etc.)</p></li><li><p>Strong foundation in <b>data modeling and semantic layer design</b></p></li><li><p>Proven ability to <b>design scalable systems and influence technical direction</b></p></li></ul></div></div><div><div><p><b>Nice to haves (Including AI Capabilities)</b></p></div><div><ul><li><p>Experience enabling <b>AI/GenAI use cases</b> on analytics platforms (e.g., Databricks Genie or similar)</p></li><li><p>Exposure to <b>AI-assisted development tools</b> for:</p><ul><li><p>Automating data pipeline generation</p></li><li><p>Accelerating ingestion-to-consumption workflows</p></li><li><p>Automating QA from ingestion to consumption</p></li><li><p>Automating DBT model generation</p></li><li><p>Improving testing, documentation, and lineage tracking</p></li></ul></li><li><p>Experience building or leveraging <b>metadata-driven or declarative pipelines</b></p></li><li><p>Familiarity with <b>self-service BI tools</b> (e.g., ThoughtSpot)</p></li><li><p><span><span>Knowledge of<span> 
</span></span></span><b>data governance, cataloging, and lineage systems</b></p></li><li><p>Experience in <b>SaaS or multi-product ecosystems</b></p></li><li><p>Understanding of <b>privacy, compliance, and secure data access patterns</b></p></li></ul></div><div><p> </p></div><div><p><b>Where</b></p></div></div><div><div><p>This is a fully remote, US-based position with minimal to no travel required. You may be asked to travel up to four times per year for quarterly planning. The EverCommerce team is distributed globally, with teams in the U.S., Canada, the U.K., Jordan, New Zealand, and Australia. With a widely distributed team, we are used to working remotely across different time zones. This role can be based anywhere in the United States or Canada – if you’re close to one of our offices, we can set you up in-office, or you can work 100% remotely. Please note that you must be eligible to work without sponsorship to qualify for this position, and this role may require travel to our Corporate Headquarters in Denver, Colorado, or to other office locations around North America.</p><p><b>Benefits and Perks</b></p><ul><li><p>Flexibility to work where/how you want within your country of employment – in-office, remote, or hybrid</p></li><li><p>Continued investment in your professional development</p></li><li><p>Day 1 access to a robust health and wellness benefits package, including an annual wellness stipend</p></li><li><p>401(k) with up to a 4% match and immediate vesting</p></li><li><p>Flexible and generous time off (FTO)</p></li><li><p>Employee Stock Purchase Program</p></li></ul><p><b>Compensation:</b> The target base compensation for this position is $130,000 to $150,000 USD per year in most US locations. Final offer amounts are determined by multiple factors, including location, local market variances, and candidate experience and expertise, and may vary from the amounts listed above.</p></div></div><p></p><div><p><span>EverCommerce is an equal opportunity employer and we value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender identity, sexual orientation, age, marital status, veteran status, or disability status. We look forward to reviewing your credentials and getting to know more about your experience!</span></p></div>
