We are looking for talented and driven Data Engineers at various levels to work with customers and data scientists to build data warehouses, analytical dashboards and ML capabilities that meet customer needs.
Required Qualifications :
- 3-5 years of experience developing and managing streaming and batch data pipelines
- Experience in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning
- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop
- Strong hands-on programming and scripting skills for the Big Data ecosystem (Python, Scala, Spark, etc.)
- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc.
- Familiarity with cloud-based platforms like AWS, Azure or GCP
- Experience with cloud data warehouses like Redshift and Snowflake
- Proficient in writing complex SQL queries.
- Experience working with structured and semi-structured data formats like CSV, JSON and XML
- Desire to learn about, explore and invent new tools for solving real-world problems using data
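As a small illustration of the structured and semi-structured formats mentioned above (CSV, JSON, XML), here is a minimal sketch using only Python's standard library; the records and field names are invented for the example:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

# Hypothetical sample records in three common formats.
csv_text = "id,name\n1,Ada\n2,Grace\n"
json_text = '{"id": 3, "name": "Edsger"}'
xml_text = "<user><id>4</id><name>Alan</name></user>"

# CSV: DictReader yields one dict per row, keyed by the header line.
rows = list(csv.DictReader(io.StringIO(csv_text)))

# JSON: parsed directly into native dicts/lists.
user = json.loads(json_text)

# XML: navigate the element tree and pull out text nodes.
root = ET.fromstring(xml_text)
xml_user = {"id": int(root.findtext("id")), "name": root.findtext("name")}

print(rows[0]["name"], user["name"], xml_user["name"])
```

Note that CSV values arrive as strings, while JSON preserves numeric types; real pipelines normalize such type differences at the ingestion layer.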
Desired Qualifications :
- Cloud computing experience, particularly Amazon Web Services (AWS)
- Prior experience in Data Warehousing concepts, multi-dimensional data models
- Full command of analytics concepts including dimensions, KPIs, reports & dashboards
About Hypersonix Inc
What we ask
- 1+ years of Data Engineering Experience - Design, develop, deliver and maintain data infrastructures.
- SQL Specialist – strong knowledge and seasoned experience with SQL queries (strong in outer joins, aggregations, unions, window functions & CTEs)
- Languages: Python
- Good communicator, shows initiative, works well with stakeholders.
- Experience working closely with Data Analysts, providing the data they need and guiding them through issues.
- Solid ETL experience with Hadoop/Hive/PySpark/Presto/Spark SQL
- Solid communication and articulation skills
- Able to handle stakeholders independently, with minimal intervention from the reporting manager.
- Develop strategies to solve problems in logical yet creative ways.
- Create custom reports and presentations accompanied by strong data visualization and storytelling
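The SQL skills listed above (outer joins, aggregations, unions, window functions, CTEs) can be illustrated with a small self-contained sketch using Python's bundled sqlite3 module. The tables and figures are invented; window functions require SQLite 3.25 or later:

```python
import sqlite3

# In-memory database with hypothetical customers/orders tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cy');
INSERT INTO orders VALUES (10, 1, 50.0), (11, 1, 30.0), (12, 2, 20.0);
""")

# CTE + LEFT OUTER JOIN + aggregation: total spend per customer,
# keeping customers with no orders (Cy) thanks to the outer join,
# then ranking customers with a window function.
totals = con.execute("""
WITH spend AS (
    SELECT c.name, COALESCE(SUM(o.amount), 0) AS total
    FROM customers c
    LEFT OUTER JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
)
SELECT name, total,
       RANK() OVER (ORDER BY total DESC) AS rnk
FROM spend
ORDER BY rnk
""").fetchall()

print(totals)  # -> [('Ann', 80.0, 1), ('Bob', 20.0, 2), ('Cy', 0, 3)]
```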
THE ROLE: Sr. Cloud Data Infrastructure Engineer
As a Sr. Cloud Data Infrastructure Engineer with Intuitive, you will be responsible for building data pipelines and converting them from legacy environments to modern cloud environments, supporting the analytics and data science initiatives of our enterprise customers. You will work closely with SMEs in Data Engineering and Cloud Engineering to create solutions and extend Intuitive's DataOps Engineering projects and initiatives. This is a central, critical role: establishing DataOps/DataX data logistics and management, building data pipelines, enforcing best practices, owning the construction of complex and performant Data Lake environments, and working closely with Cloud Infrastructure Architects and DevSecOps automation teams. The Sr. Cloud Data Infrastructure Engineer is the main point of contact for all things related to Data Lake formation and data at scale. In this role, we expect our DataOps leaders to be obsessed with data and with providing insights that help our end customers.
ROLES & RESPONSIBILITIES:
- Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built
- Developing scalable and re-usable frameworks for ingesting large volumes of data from multiple sources.
- Modern Data Orchestration engineering - query tuning, performance tuning, troubleshooting, and debugging big data solutions.
- Provides technical leadership, fosters a team environment, and provides mentorship and feedback to technical resources.
- Deep understanding of ETL/ELT design methodologies, patterns, personas, strategy, and tactics for complex data transformations.
- Data processing/transformation using various technologies such as Spark and cloud services.
- Understand current data engineering pipelines using legacy SAS tools and convert to modern pipelines.
Data Infrastructure Engineer Strategy Objectives: End-to-End Strategy
Define how data is acquired, stored, processed, distributed, and consumed.
Collaboration and shared responsibility across disciplines as partners in delivery, progressing our maturity model in the End-to-End Data practice.
- Understanding and experience with modern cloud data orchestration and engineering for one or more of the following cloud providers - AWS, Azure, GCP.
- Leading multiple engagements to design and develop data logistic patterns to support data solutions using data modeling techniques (such as file based, normalized or denormalized, star schemas, schema on read, Vault data model, graphs) for mixed workloads, such as OLTP, OLAP, streaming using any formats (structured, semi-structured, unstructured).
- Applying leadership and proven experience with architecting and designing data implementation patterns and engineered solutions using native cloud capabilities that span data ingestion & integration (ingress and egress), data storage (raw & cleansed), data prep & processing, master & reference data management, data virtualization & semantic layer, data consumption & visualization.
- Implementing cloud data solutions in the context of business applications, cost optimization, client's strategic needs and future growth goals as it relates to becoming a 'data driven' organization.
- Applying and creating leading practices that support highly available, scalable, process- and storage-intensive solution architectures for data integration/migration, analytics and insights, AI, and ML requirements.
- Applying leadership and review to create high-quality, detailed documentation related to cloud data engineering.
- Implementing cloud data orchestration and data integration patterns (AWS Glue, Azure Data Factory, Event Hub, Databricks, etc.), storage and processing (Redshift, Azure Synapse, BigQuery, Snowflake)
- Certification in AWS, Azure, or GCP data engineering or migration is a big plus.
- 10+ years' experience as a data engineer.
- Must have 5+ years implementing data engineering solutions with multiple cloud providers and toolsets.
- This is a hands-on role building data pipelines using Cloud Native and Partner Solutions, requiring hands-on technical experience with Data at Scale.
- Must have deep expertise in one of the programming languages for data processing (Python, Scala). Experience with Python, PySpark, Hadoop, Hive and/or Spark to write data pipelines and data processing layers.
- Must have worked with multiple database technologies and patterns. Good SQL experience for writing complex SQL transformations.
- Performance tuning of Spark SQL running on S3/Data Lake/Delta Lake storage, and strong knowledge of Databricks and cluster configurations.
- Nice to have: Databricks administration, including the security and infrastructure features of Databricks.
- Experience with Development Tools for CI/CD, Unit and Integration testing, Automation and Orchestration
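The orchestration and pipeline work described above centers on DAGs of dependent tasks. A minimal sketch of that idea in pure Python follows; the task names and data are hypothetical, and real orchestrators such as Airflow or Azure Data Factory add scheduling, retries, and monitoring on top of this core concept (`graphlib` requires Python 3.9+):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Results shared between hypothetical pipeline steps.
results = {}

def extract():
    results["raw"] = [1, 2, 3]

def transform():
    results["clean"] = [x * 10 for x in results["raw"]]

def load():
    results["loaded"] = sum(results["clean"])

tasks = {"extract": extract, "transform": transform, "load": load}

# Each task maps to the set of tasks it depends on.
deps = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}

# Run tasks in dependency order -- exactly what a DAG scheduler guarantees.
order = list(TopologicalSorter(deps).static_order())
for name in order:
    tasks[name]()

print(order, results["loaded"])  # ['extract', 'transform', 'load'] 60
```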
We are an emerging Artificial Intelligence-based startup catering to the needs of industries that employ cutting-edge technologies for their operations. Currently, we provide services to disruptive sectors such as drone tech, video surveillance, human-computer interaction, etc. In general, we believe that AI has the ability to shape the future of humanity, and we aim to spearhead this transition.
About the role:
We are looking for a highly motivated data engineer with a strong algorithmic mindset and problem-solving propensity.
Since we are operating in a highly competitive market, every opportunity to increase efficiency and cut costs is critical and the candidate should have an eye for such opportunities. We are constantly innovating – working on novel hardware and software – so a high level of flexibility and celerity in learning is expected.
- Analyzing and organizing raw data
- Building data systems and pipelines
- Evaluating business needs and objectives
- Interpreting trends and patterns
- Preparing data for prescriptive and predictive modeling
- Building algorithms and prototypes.
- Collaborating with data scientists and architects on projects
- Explore new promising technology and implement it to create awesome stuff.
- Must have:
- Good to have:
- ETL tool experience
- Data warehousing solutions experience
What’s in it for you:
- Opportunity to work on many new cutting-edge technologies: we promise rapid growth in your skillset by providing a steep learning curve.
- Opportunity to work closely with our experienced founding members who are experts in developing scalable practical AI products and software architecture development.
This role comes with a market competitive salary – which will remain aligned to your performance, degree of professionalism, culture fit and alignment with Aidetic’s long-term business strategy.
• Engage directly with the client to understand marketing objectives
• Develop custom performance reporting and analyses across multiple channels (Search, Display, Social, etc.)
• Architect solutions & provide recommendations that drive results on client campaigns & objectives
• Support A/B testing build work using customer experience optimization tools
• Collaborate closely with other teams to formulate industry-best-practice analytic solutions and directly contribute to a variety of validation activities such as model design, execution and assessment; data review; and campaign/marketing performance evaluation
• Responsible for assisting in defining a comprehensive measurement framework and developing effective reporting and dashboards, with the objective of evaluating marketing performance and providing recommendations for enhancement
• Manage the day-to-day core insights/trends of our digital marketing programs to help guide
• Work with various internal and external stakeholders to develop project plans, manage day-to-day tasks and meet project deadlines
• 2 - 5 years of industry experience required
• Bachelor's or Master's degree in Mathematics, Statistics, Economics, Finance, or Engineering required
• Strong experience with SQL and Python required
• Demonstrated proficiency in multiple digital marketing channels (Paid Search or SEM, Organic Search or SEO, Paid Social, Earned Social, Display, Email) required
• Media Platforms and 3rd Party Tools experience (Google AdWords, DoubleClick, MediaMath, Bing Ads, ExactTarget, Hitwise, BrightEdge, Facebook, etc.) useful
• Strong experience in Web Analytics tools (Adobe, Omniture, WebTrends, Google Analytics, AdWords, adCenter, DoubleClick, MediaMath, ExactTarget, etc.) preferred
• Understanding of relational databases and familiarity with data processing
• Excellent written and oral presentation skills
• Strong problem solving and consulting skills
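The A/B testing support mentioned above often comes down to deciding whether the conversion-rate difference between two variants is statistically meaningful. A sketch of a two-proportion z-test in plain Python follows; the counts are invented for illustration:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 120/1000 vs. A's 100/1000.
z, p = two_proportion_ztest(100, 1000, 120, 1000)
print(round(z, 2), round(p, 3))
```

With these made-up numbers the p-value sits around 0.15, so the observed lift would not clear a conventional 0.05 significance threshold despite looking sizable.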
Job Summary :
Independently handle the delivery of analytics assignments by mentoring a team of 3 - 10 people and delivering to exceed client expectations
- Coordinate with onsite company consultants to ensure high-quality, on-time delivery
- Take responsibility for technical skill-building within the organization (training, process definition, research of new tools and techniques etc.)
- Take part in organizational development activities to take company to the next level
Qualification, Skills & Prior Work Experience :
- Great analytical skills, detail-oriented approach
- Sound knowledge of MS Office tools like Excel and PowerPoint, and of data visualization tools like Tableau or Power BI
- Strong experience in SQL, Python, SAS, SPSS, Statistica, R, MATLAB or similar tools would be preferable
- Ability to adapt and thrive in the fast-paced environment that young companies operate in
- Preference for candidates with analytics work experience
- Programming skills: Java/Python/SQL, with OOP-based programming knowledge
Job Location : Chennai, Work from Home will be provided until COVID situation improves
- Minimum one year of experience needed
- Only 2019 and 2020 graduates are eligible
- A minimum aggregate of 70% throughout studies is required
- A postgraduate degree is a must
Are you passionate about handling large & complex data problems, want to make an impact and have the desire to work on ground-breaking big data technologies? Then we are looking for you.
At Amagi, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Amagi’s Data Engineering and Business Intelligence team is looking for passionate, detail-oriented, tech-savvy, energetic team members who like to think outside the box.
Amagi’s Data warehouse team deals with petabytes of data catering to a wide variety of real-time, near real-time and batch analytical solutions. These solutions are an integral part of business functions such as Sales/Revenue, Operations, Finance, Marketing and Engineering, enabling critical business decisions. Designing, developing, scaling and running these big data technologies using native technologies of AWS and GCP are a core part of our daily job.
- Experience in building highly cost-optimised data analytics solutions
- Experience in designing and building dimensional data models to improve accessibility, efficiency and quality of data
- Experience (hands on) in building high quality ETL applications, data pipelines and analytics solutions ensuring data privacy and regulatory compliance.
- Experience in working with AWS or GCP
- Experience with relational and NoSQL databases
- Experience with full-stack web development (preferably Python)
- Expertise with data visualisation systems such as Tableau and QuickSight
- Proficiency in writing advanced SQL queries, with expertise in performance tuning when handling large data volumes
- Familiarity with ML/AI technologies is a plus
- Demonstrate strong understanding of development processes and agile methodologies
- Strong analytical and communication skills. Should be self-driven, highly motivated and able to learn quickly
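Dimensional modeling, as mentioned in the requirements above, typically means a fact table joined to small dimension tables (a star schema). A toy roll-up in plain Python follows; all tables and figures are hypothetical:

```python
from collections import defaultdict

# Tiny hypothetical star schema: one fact table plus two dimensions,
# with dimensions keyed by surrogate integer keys.
dim_channel = {1: "web", 2: "mobile"}
dim_region = {1: "EU", 2: "US"}

# Fact rows reference dimensions by surrogate key and carry measures.
fact_sales = [
    {"channel_id": 1, "region_id": 1, "revenue": 100.0},
    {"channel_id": 1, "region_id": 2, "revenue": 250.0},
    {"channel_id": 2, "region_id": 1, "revenue": 75.0},
]

# A roll-up query: total revenue by channel name (join fact -> dimension,
# then aggregate) -- the same shape as a GROUP BY over a star schema.
revenue_by_channel = defaultdict(float)
for row in fact_sales:
    revenue_by_channel[dim_channel[row["channel_id"]]] += row["revenue"]

print(dict(revenue_by_channel))  # -> {'web': 350.0, 'mobile': 75.0}
```

Keeping measures in the fact table and descriptive attributes in dimensions is what makes such roll-ups cheap, which is the accessibility and efficiency benefit the requirement refers to.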
Data Analytics is at the core of our work, and you will have the opportunity to:
- Design data-warehousing solutions on Amazon S3 with Athena, Redshift, GCP Bigtable, etc.
- Lead quick prototypes by integrating data from multiple sources
- Do advanced Business Analytics through ad-hoc SQL queries
- Work on Sales/Finance reporting solutions using Tableau, HTML5 and React applications
We build amazing experiences and create depth in knowledge for our internal teams and our leadership. Our team is a friendly bunch of people that help each other grow and have a passion for technology, R&D, modern tools and data science.
Our work relies on a deep understanding of the company's needs and an ability to work through vast amounts of internal data such as sales, KPIs, forecasts, inventory, etc. Key expectations of this role include data analytics, building data lakes, and end-to-end reporting solutions. If you have a passion for cost-optimised analytics and data engineering and are eager to learn advanced data analytics at a large scale, this might just be the job for you.
Education & Experience
A bachelor's or master's degree in Computer Science with 5 to 7 years of experience; previous data engineering experience is a plus.
As a Senior Engineer - Big Data Analytics, you will help drive the architectural design and development of Healthcare Platforms, Products, Services, and Tools to deliver the vision of the Company. You will significantly contribute to engineering, technology, and platform architecture. This will be done through innovation and collaboration with engineering teams and related business functions. This is a critical, highly visible role within the company that has the potential to drive significant business impact.
The scope of this role will include strong technical contribution in the development and delivery of Big Data Analytics Cloud Platform, Products and Services in collaboration with execution and strategic partners.
- Design & develop, operate, and drive scalable, resilient, and cloud native Big Data Analytics platform to address the business requirements
- Help drive technology transformation to achieve business transformation, through the creation of the Healthcare Analytics Data Cloud that will help Change establish a leadership position in healthcare data & analytics in the industry
- Help in successful implementation of Analytics as a Service
- Ensure Platforms and Services meet SLA requirements
- Be a significant contributor and partner in the development and execution of the Enterprise Technology Strategy
- At least 2 years of experience in software development for big data analytics and cloud, and at least 5 years of experience in software development overall
- Experience working with High Performance Distributed Computing Systems in public and private cloud environments
- Understands big data open-source ecosystems and their players. Contribution to open source is a strong plus
- Experience with Spark, Spark Streaming, Hadoop, AWS/Azure, NoSQL Databases, In-Memory caches, distributed computing, Kafka, OLAP stores, etc.
- Successful track record of creating working Big Data stacks aligned with business needs, and of delivering enterprise-class products on time
- Experience delivering and managing operating environments at scale
- Experience with Big Data/Micro Service based Systems, SaaS, PaaS, and Architectures
- Experience developing systems in Java, Python, and Unix environments
- BSCS, BSEE or equivalent, MSCS preferred
Graphene is a Singapore-headquartered AI company that has been recognized as Singapore's Best Start-Up by Switzerland's Seedstars World, and has also been awarded best AI platform for healthcare at VivaTech Paris. Graphene India is also a member of the exclusive NASSCOM Deeptech club. We are developing an AI platform which is disrupting and replacing traditional Market Research with unbiased insights, with a focus on healthcare, consumer goods and financial services.
Graphene was founded by Corporate leaders from Microsoft and P&G, and works closely with the Singapore Government & Universities in creating cutting edge technology which is gaining traction with many Fortune 500 companies in India, Asia and USA.
Graphene’s culture is grounded in delivering customer delight by recruiting high potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.
Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups that is self-funded, profitable and debt-free. We have already created a strong bench strength of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.
Job title: - Data Analyst
The Data Analyst is responsible for storage, data enrichment, data transformation, data gathering based on data requests, and testing and maintaining data pipelines.
Responsibilities and Duties
- Managing end to end data pipeline from data source to visualization layer
- Ensure data integrity; Ability to pre-empt data errors
- Organized management and storage of data
- Provide quality assurance of data, working with quality assurance analysts if necessary.
- Commissioning and decommissioning of data sets.
- Processing confidential data and information according to guidelines.
- Helping develop reports and analysis.
- Troubleshooting the reporting database environment and reports.
- Managing and designing the reporting environment, including data sources, security, and metadata.
- Supporting the data warehouse in identifying and revising reporting requirements.
- Supporting initiatives for data integrity and normalization.
- Evaluating changes and updates to source production systems.
- Training end-users on new reports and dashboards.
- Initiate data gathering based on data requirements
- Analyse the raw data to check if the requirement is satisfied
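Ensuring data integrity and pre-empting data errors, as listed above, often starts with simple validation checks before records enter a pipeline. A minimal sketch follows; the record schema and rules are hypothetical:

```python
def validate(records):
    """Return (index, reason) pairs for records failing basic integrity checks."""
    errors = []
    seen_ids = set()
    for i, rec in enumerate(records):
        # Check the primary key: present and unique.
        if rec.get("id") is None:
            errors.append((i, "missing id"))
        elif rec["id"] in seen_ids:
            errors.append((i, "duplicate id"))
        else:
            seen_ids.add(rec["id"])
        # Check the measure: numeric and non-negative.
        amount = rec.get("amount")
        if not isinstance(amount, (int, float)) or amount < 0:
            errors.append((i, "bad amount"))
    return errors

# Hypothetical batch with deliberate problems.
records = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": -5},      # duplicate id and negative amount
    {"id": None, "amount": 3.0},  # missing id
]
print(validate(records))
```

Checks like these are cheap to run on every load and catch the classes of error (missing keys, duplicates, out-of-range values) that otherwise surface much later in dashboards.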
Qualifications and Skills
- Technologies required: Python, SQL/NoSQL databases (Cosmos DB)
- Experience required: 2–5 years, including experience in data analysis using Python
- Understanding of the software development life cycle
- Plan, coordinate, develop, test and support data pipelines; document and support reporting dashboards (Power BI)
- Automate the steps needed to transform and enrich data.
- Communicate issues, risks, and concerns proactively to management. Document the process thoroughly to allow peers to assist with support as needed.
- Excellent verbal and written communication skills
The Data Engineer will be responsible for selecting and integrating the required Big Data tools and frameworks, and will implement data ingestion and ETL/ELT processes.
Required Experience, Skills and Qualifications:
- Hands-on experience with Big Data tools/technologies like Spark, Databricks, MapReduce, Hive, HDFS.
- Expertise and excellent understanding of the big data toolset, such as Sqoop, Spark Streaming, Kafka, NiFi
- Proficiency in any of the programming languages Python/Scala/Java, with 4+ years' experience
- Experience with cloud infrastructures like MS Azure, Data Lake, etc.
- Good working knowledge of NoSQL DBs (Mongo, HBase, Cassandra)
• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the application developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement Projects based on functional specifications.
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good customer communication.
• Good analytical skills