As a Senior Engineer - Big Data Analytics, you will help drive the architectural design and development of Healthcare Platforms, Products, Services, and Tools to deliver the vision of the Company. You will contribute significantly to engineering, technology, and platform architecture through innovation and collaboration with engineering teams and related business functions. This is a critical, highly visible role within the company that has the potential to drive significant business impact.
The scope of this role includes strong technical contributions to the development and delivery of the Big Data Analytics Cloud Platform, Products, and Services in collaboration with execution and strategic partners.
- Design, develop, operate, and drive a scalable, resilient, cloud-native Big Data Analytics platform that addresses business requirements
- Help drive technology transformation, and through it business transformation, by creating the Healthcare Analytics Data Cloud that will help Change establish an industry leadership position in healthcare data & analytics
- Help ensure the successful implementation of Analytics as a Service
- Ensure Platforms and Services meet SLA requirements
- Be a significant contributor and partner in the development and execution of the Enterprise Technology Strategy
- At least 5 years of experience in software development, including at least 2 years developing software for big data analytics and the cloud
- Experience working with High Performance Distributed Computing Systems in public and private cloud environments
- Understanding of the big data open-source ecosystem and its players; contributions to open source are a strong plus
- Experience with Spark, Spark Streaming, Hadoop, AWS/Azure, NoSQL Databases, In-Memory caches, distributed computing, Kafka, OLAP stores, etc.
- Successful track record of creating working Big Data stacks that align with business needs and deliver enterprise-class products on time
- Experience delivering and managing operating environments at scale
- Experience with Big Data and microservice-based systems, SaaS, PaaS, and their architectures
- Experience developing systems in Java and Python on Unix
- BSCS, BSEE or equivalent, MSCS preferred
Roles and Responsibilities:
- Verify, review, and rectify coding questions end-to-end in the creation cycle, across all difficulty levels and multiple programming languages.
- Review, validate, and correct the test cases that belong to a particular question.
- Document and report the quality parameters and suggest continuous improvements.
- Help the team write or generate code stubs wherever necessary for a coding question in one of the programming languages, such as C, C++, Java, and Python. (A code stub is partial starter code, a snippet to help candidates get going.)
- Identify and rectify technical errors in coding questions and ensure that questions meet the required quality standards.
- Work with the Product Manager to research the latest technologies, trends, and assessments in coding.
- Bring an innovative approach to the ever-changing world of programming languages and framework-based technologies like ReactJS, Angular, Spring Boot, and .NET.
- 0-3 years of experience writing code in C, C++, C#, Java, or Python
- Good to have: knowledge of manual QA and the QA lifecycle
- Ability to understand algorithms and Data Structures.
- Candidates with exposure to ReactJS, Java Spring Boot, or AI/ML will also be a good fit.
- Analytical and problem-solving skills, with the ability to understand complex problems.
- Experience on any competitive coding platform is an added advantage.
- Passion for technology.
- Degree related to Computer Science: MCA, B.E., B.Tech, B.Sc
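A code stub like the one described above is easiest to see by example. A minimal sketch in Python, with a hypothetical question and function name (not one of the company's actual items), where the scaffolding is given and only the function body is the candidate's work:

```python
# Hypothetical Python code stub for a coding question. The signature and
# driver line are given to the candidate; the body below is a reference
# solution shown purely for illustration.
def sum_of_evens(nums):
    """Return the sum of the even numbers in `nums`."""
    # --- the candidate's solution goes here ---
    return sum(n for n in nums if n % 2 == 0)

print(sum_of_evens([1, 2, 3, 4, 5, 6]))  # → 12
```

Reviewing a stub like this means checking that the signature, docstring, and driver code run cleanly in every supported language.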
- The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action.
- Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Assess the effectiveness and accuracy of new data sources and data gathering techniques.
- Develop custom data models and algorithms to apply to data sets.
- Use predictive modeling to optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Develop company A/B testing framework and test model quality.
- Develop processes and tools to monitor and analyze model performance and data accuracy.
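Much of the A/B testing and model-quality work above reduces to comparing conversion rates between variants. A minimal sketch, assuming a standard two-proportion z-test and made-up counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for an A/B experiment.

    conv_a / conv_b: conversions in each variant,
    n_a / n_b: users exposed to each variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(round(z, 3))  # → 1.879 (|z| > 1.96 would be significant at the 5% level)
```

A production framework adds experiment assignment, logging, and multiple-testing corrections on top of this core comparison.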
Requirements
- Experience using statistical languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
- Experience working with and creating data architectures.
- Looking for someone with 3-7 years of experience manipulating data sets and building statistical models
- Bachelor's or Master's degree in Computer Science or another quantitative field
- Knowledge and experience in statistical and data mining techniques:
- GLM/Regression, Random Forest, Boosting, Trees, text mining, social network analysis, etc.
- Experience querying databases and using statistical computer languages: R, Python, SQL, etc.
- Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
- Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
- Experience visualizing/presenting data for stakeholders using: Periscope, Business Objects, D3, ggplot, etc.
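As a toy illustration of the regression techniques in the list above, a one-variable ordinary least squares fit in plain Python (the data points are made up):

```python
def ols_fit(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope = covariance of x,y divided by variance of x.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

slope, intercept = ols_fit([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
print(round(slope, 3))  # → 2.01
```

In practice this is a single call to a library such as statsmodels or scikit-learn; the sketch only shows the underlying computation.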
Novelship is seeking a Data Engineer to be based in India or Remote in South East Asia to join our Tech Team.
Brief Description of the Role:
As a Data Engineer, you will be responsible for building and maintaining our analytics infrastructure, data taxonomy, data ingestion, and aggregation to provide business intelligence to different teams and to support data-dependent tools like ERP and CRM.
In this role you will:
- Analyze and design ETL solutions to store/fetch data from multiple systems like Postgres, Airtable, Google Analytics and Mixpanel.
- Drive the implementation of new data management projects such as Finance ERP and re-structure of the current data architecture.
- Participate in the building of a single source of Data Systems and Data Taxonomy projects.
- Engage in problem definition and resolution and collaborate with a diverse group of engineers and business owners from across the company.
- Work with stakeholders including the Strategy, Product and Marketing teams to assist with data-related technical issues, support their data analytics needs and work on data collection and aggregation solutions.
- Act as a technical resource for the Data team and be involved in creating and implementing current and future Analytics projects like data lake design and data warehouse design.
- Ensure quality and consistency of the data in the Data warehouse and follow best data governance practices.
- Analyze large amounts of information to discover trends and patterns to provide Business Intelligence.
- Mine and analyze data from databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
- Design and build reusable components, frameworks and libraries at scale to support analytics data products
- Build and maintain optimal data pipeline architecture and data systems. Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems
- 2 to 4 years of professional experience as a Data Engineer.
- Proficiency in Python, Scala, or R.
- Proficiency in SQL, Relational & Non-Relational Databases.
- Excellent analytical and problem-solving skills.
- Experience with Business Intelligence tools like Data Studio, Power BI and Tableau.
- Experience in Data Cleaning, Creating Data Pipelines, Data Modelling, Storytelling and Dashboarding.
- Bachelor's or Master's degree in Computer Science
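A minimal sketch of the pipeline work described above, using a hypothetical orders feed and the standard-library sqlite3 module as a stand-in warehouse (the real sources here would be Postgres, Airtable, Google Analytics, or Mixpanel):

```python
import sqlite3

def extract():
    # Stand-in for pulling rows from Postgres, Airtable, or an API.
    return [
        {"order_id": 1, "amount": "49.90", "country": "sg"},
        {"order_id": 2, "amount": "15.00", "country": "my"},
    ]

def transform(rows):
    # Normalise types and casing before loading.
    return [(r["order_id"], float(r["amount"]), r["country"].upper())
            for r in rows]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
# → (2, 64.9)
```

A real pipeline wraps each stage with scheduling, retries, and data-quality checks; the shape of extract/transform/load stays the same.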
● Able to contribute to gathering functional requirements, developing technical specifications, and planning test cases
● Demonstrate technical expertise by solving challenging programming and design problems
● 60% hands-on coding with architecture ownership of one or more products
● Ability to articulate architectural and design options, and educate development teams and stakeholders
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive forward results
● BS/MS in computer science or equivalent work experience
● 8-12 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big data EcoSystems.
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra
● Expertise with any of the following object-oriented languages: Java/J2EE, Scala
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: Demonstrated ability to explain complex technical issues to
both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Business Acumen - strategic thinking & strategy development
● Experience on Cloud or AWS is preferable
● Good understanding of, and ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements.
● Experience with Agile Development, SCRUM, or Extreme Programming methodologies
- 5+ years of experience in software development.
- At least 2 years of relevant work experience on large scale Data applications
- Good attitude, strong problem-solving abilities, analytical skills, ability to take ownership as appropriate
- Should be able to code, debug, performance-tune, and deploy applications to production.
- Should have good working experience with the Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
- J2EE Frameworks (Spring/Hibernate/REST)
- Spark Streaming or any other streaming technology.
- Java programming language is mandatory.
- Ability to work sprint stories to completion, including unit test case coverage.
- Experience working in Agile Methodology
- Excellent communication and coordination skills
- Knowledgeable (hands-on preferred) in UNIX environments and various continuous integration tools.
- Must be able to integrate quickly into the team and work independently towards team goals
- Take complete responsibility for the execution of sprint stories
- Be accountable for the delivery of the tasks in the defined timelines with good quality
- Follow the processes for project execution and delivery.
- Follow agile methodology
- Work with the team lead closely and contribute to the smooth delivery of the project.
- Understand/define the architecture and discuss its pros and cons with the team
- Involve in the brainstorming sessions and suggest improvements in the architecture/design.
- Work with other team leads to get the architecture/design reviewed.
- Work with the project's clients and counterparts in the US.
- Keep all stakeholders updated about project/task status, and about risks and issues if there are any.
We are currently looking for a Junior Data Scientist to join our growing Data Science team in Panchkula. As a Jr. Data Scientist, you will work closely with the Head of Data Science and a variety of cross-functional teams to identify opportunities to enhance the customer journey, reduce churn, improve user retention, and drive revenue.
- Medium to Expert level proficiency in either R or Python.
- Expert level proficiency in SQL scripting for RDBMS and NoSQL DBs (especially MongoDB)
- Tracking and insights on key metrics around User Journey, User Retention, Churn Modelling and Prediction, etc.
- Medium to highly skilled in data structures and ML algorithms, with the ability to create efficient solutions to complex problems.
- Experience working on an end-to-end data science pipeline: problem scoping, data gathering, EDA, modeling, insights, visualizations, monitoring, and maintenance.
- Medium to proficient in creating beautiful Tableau dashboards.
- Problem-solving: Ability to break the problem into small parts and apply relevant techniques to drive the required outcomes.
- Intermediate to advanced knowledge of machine learning, probability theory, statistics, and algorithms. You will be required to discuss and use various algorithms and approaches on a daily basis.
- Proficient in at least a few of the following: regression, Bayesian methods, tree-based learners, SVM, RF, XGBoost, time series modelling, GLM, GLMM, clustering, deep learning, etc.
Good to Have
- Experience in one of the emerging technologies like deep learning, recommender systems, etc.
- Experience working in the gaming domain
- Marketing analytics, cross-sell, up-sell, campaign analytics, fraud detection
- Experience in building and maintaining Data Warehouses in AWS would be a big plus!
- PF and gratuity
- Working 5 days a week
- Paid leaves (CL, SL, EL, ML) and holidays
- Parties, festivals, birthday celebrations, etc
- Equitability: absence of favouritism in hiring & promotion
Do you have a passion for computer vision and deep learning problems? We are looking for someone who thrives on collaboration and wants to push the boundaries of what is possible today! Material Depot (materialdepot.in) is on a mission to be India’s largest tech company in the Architecture, Engineering and Construction space by democratizing the construction ecosystem and bringing stakeholders onto a common digital platform. Our engineering team is responsible for developing Computer Vision and Machine Learning tools to enable digitization across the construction ecosystem. The founding team includes people from top management consulting firms and top colleges in India (like BCG, IITB), has worked extensively in the construction space globally, and is funded by top Indian VCs.
Our team empowers Architectural and Design Businesses to effectively manage their day to day operations. We are seeking an experienced, talented Data Scientist to join our team. You’ll be bringing your talents and expertise to continue building and evolving our highly available and distributed platform.
Our solutions demand complex problem solving in computer vision and require robust, efficient, well-tested, and clean implementations. The ideal candidate will possess the self-motivation, curiosity, and initiative to achieve those goals, and is a lifelong learner who passionately seeks to improve themselves and the quality of their work. You will work together with similar minds in a unique team where your skills and expertise can be used to influence future user experiences that will be used by millions.
In this role, you will:
- Extensive knowledge in machine learning and deep learning techniques
- Solid background in image processing/computer vision
- Experience in building datasets for computer vision tasks
- Experience working with and creating data structures / architectures
- Proficiency in at least one major machine learning framework
- Experience visualizing data to stakeholders
- Ability to analyze and debug complex algorithms
- Good understanding and applied experience in classic 2D image processing and segmentation
- Robust semantic object detection under different lighting conditions
- Segmentation of non-rigid contours in challenging/low contrast scenarios
- Sub-pixel accurate refinement of contours and features
- Experience in image quality assessment
- Experience with in-depth failure analysis of algorithms
- Highly skilled in at least one scripting language such as Python or MATLAB, and solid experience in C++
- Creativity and curiosity for solving highly complex problems
- Excellent communication and collaboration skills
- Mentor and support other technical team members in the organization
- Create, improve, and refine workflows and processes for delivering quality software on time and with carefully calculated debt
- Work closely with product managers, customer support representatives, and account executives to help the business move fast and efficiently through relentless automation.
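The classic 2D processing and segmentation named in the requirements above can be illustrated with a toy sketch: thresholding followed by 4-connected component labelling on a tiny grayscale image (nested lists stand in for a real image array; a production system would use OpenCV or similar):

```python
def segment(image, threshold):
    """Threshold `image`, then label its 4-connected bright regions."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and labels[y][x] == 0:
                count += 1
                stack = [(y, x)]  # flood-fill one component
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and image[cy][cx] >= threshold
                            and labels[cy][cx] == 0):
                        labels[cy][cx] = count
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, count

img = [
    [200, 210,   0,   0],
    [190,   0,   0, 250],
    [  0,   0, 240, 255],
]
labels, n = segment(img, threshold=128)
print(n)  # → 2 bright regions
```

Robust semantic detection under varying lighting replaces the fixed threshold with learned models, but the labelling idea carries over.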
How you will do this:
- You’re part of an agile, multidisciplinary team.
- You bring your own unique skill set to the table and collaborate with others to accomplish your team’s goals.
- You prioritize your work with the team and its product owner, weighing both the business and technical value of each task.
- You experiment, test, try, fail, and learn continuously.
- You don’t do things just because they were always done that way, you bring your experience and expertise with you and help the team make the best decisions.
For this role, you must have:
- Strong knowledge of and experience with the functional programming paradigm.
- Experience conducting code reviews, providing feedback to other engineers.
- Great communication skills and a proven ability to work as part of a tight-knit team.
- Sr. Data Engineer:
Core Skills – Data Engineering, Big Data, PySpark, Spark SQL, and Python
A candidate with a prior Palantir Foundry or Clinical Trial Data Model background is preferred
- Responsible for data engineering, Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, reusable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion.
- Have a good understanding of the Foundry platform landscape and its capabilities
- Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
- Defines company data assets (data models) and the PySpark/Spark SQL jobs that populate them.
- Designs data integrations and data quality framework.
- Design & implement integrations with internal and external systems and the F1 AWS platform using Foundry Data Connector or the Magritte agent
- Collaborate with data scientists, data analysts, and technology teams to document and leverage their understanding of the Foundry integration with different data sources
- Actively participate in agile work practices
- Coordinate with the Quality Engineer to ensure that all quality controls, naming conventions, and best practices have been followed
Desired Candidate Profile:
- Strong data engineering background
- Experience with Clinical Data Model is preferred
- Experience in:
  - SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
  - Java and Groovy for our back-end applications and data integration tools
  - Python for data processing and analysis
  - Cloud infrastructure based on AWS EC2 and S3
- 7+ years IT experience, 2+ years’ experience in Palantir Foundry Platform, 4+ years’ experience in Big Data platform
- 5+ years of Python and PySpark development experience
- Strong troubleshooting and problem solving skills
- BTech or master's degree in computer science or a related technical field
- Experience designing, building, and maintaining big data pipeline systems
- Hands-on experience on Palantir Foundry Platform and Foundry custom Apps development
- Able to design and implement data integration between Palantir Foundry and external Apps based on Foundry data connector framework
- Hands-on with programming languages, primarily Python, R, Java, and Unix shell scripting
- Hands-on experience with the AWS / Azure cloud platform and stack
- Strong in API-based architecture and concepts; able to do a quick PoC using API integration and development
- Knowledge of machine learning and AI
- Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.
- Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision