[Job - 29253] Senior Data Developer (AWS), Brazil
Quick Summary
We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions. With over 8,000 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.
The successful candidate will contribute to building the technical foundation of our platform, enabling independent evolution, integration with multiple data sources, and accelerating new AI capabilities within the company. This position demands a blend of technical expertise, strategic thinking, and a hands-on approach to data development.
Responsibilities:
- Data Pipeline Development: Develop jobs in AWS Glue/Spark or Python (Pandas/Lambdas) for transforming raw data into curated datasets.
- Data Storage Structuring: Structure data in Amazon S3 (Data Lake) and integrate it with Vector Databases or Redshift to support advanced analytics.
- Data Processing: Handle both structured and unstructured data, ensuring it is processed into optimized formats (e.g., Parquet) for efficient querying and analysis.
- Orchestration Configuration: Configure workflows using AWS Step Functions or Managed Workflows for Apache Airflow (MWAA) to automate data processing tasks.
- Collaboration with Cross-functional Teams: Work closely with data scientists, engineers, and business stakeholders to understand data needs and deliver high-quality solutions.
- Performance Monitoring: Monitor and analyze data pipeline performance, ensuring reliability and efficiency in data processing workflows.
- Governance and Scalability: Implement data governance practices and scalable architectures to support corporate data initiatives.
- AI Integration: Support the integration of AI capabilities into data workflows, enhancing the overall efficiency and effectiveness of our solutions.
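To illustrate the kind of work the first few responsibilities describe, here is a minimal, hedged sketch of the Python (Pandas) route: read raw CSV, normalize it into a curated frame, and (in production) write Parquet. The function name, column names, and sample data are illustrative only, and local strings stand in for the `s3://` paths the role would actually use.

```python
# Illustrative sketch of a raw-to-curated transform in pandas.
# In AWS, the input/output would be s3:// URIs (e.g. read via s3fs)
# and the result would be written as Parquet for efficient querying.
import io
import pandas as pd

def curate(raw_csv: str) -> pd.DataFrame:
    """Turn raw CSV text into a curated DataFrame."""
    df = pd.read_csv(io.StringIO(raw_csv))
    # Normalize headers: trim whitespace, lowercase, snake_case.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    # Drop rows that are entirely empty.
    df = df.dropna(how="all")
    return df

raw = "Order ID, Amount \n1,10.5\n2,20.0\n,\n"
curated = curate(raw)
# curated.to_parquet("curated/orders.parquet")  # Parquet write; needs pyarrow
print(list(curated.columns), len(curated))
```

In a Glue or Lambda job the same transform would run against S3 objects, with the curated Parquet output partitioned for downstream querying.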
Requirements:
- Advanced English proficiency (minimum) for interaction with global teams and documentation.
- Strong experience with AWS ecosystem, particularly with S3 as the primary data source.
- Proficiency in data processing frameworks such as AWS Glue, Spark, or Python (Pandas/Lambdas).
- Experience with Databricks and Databricks Connector, including API integrations for data processing and structuring.
- Expertise in handling structured and unstructured data, with knowledge of Vector Databases (e.g., Qdrant) for AI applications.
- Proven ability to work in a hands-on capacity, transforming diverse files into processable datasets.
- Familiarity with AI frameworks and methodologies.
- Experience with data orchestration tools like Apache Airflow.
- Knowledge of data governance and compliance standards.
- Understanding of performance optimization techniques for data pipelines.
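The orchestration tools named above (Step Functions, Airflow/MWAA) share one core idea: tasks form a dependency graph and run in topological order. The toy below sketches that idea using only the Python standard library; the task names are illustrative, and this is not Airflow or Step Functions code.

```python
# Stdlib-only toy of DAG-style orchestration: each task lists its
# upstream dependencies, and the scheduler emits a valid run order.
from graphlib import TopologicalSorter

# task -> set of upstream tasks that must finish first (illustrative names)
dag = {
    "extract_raw": set(),
    "curate_parquet": {"extract_raw"},
    "load_redshift": {"curate_parquet"},
    "refresh_vectors": {"curate_parquet"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)
```

Real orchestrators add retries, scheduling, and state tracking on top of this ordering, but the dependency-graph model is the same.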
Listing Details
- Posted: May 12, 2026