About Innoplexus Consulting Services
The Client is the world’s largest media investment company. Our team of experts supports clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, e-commerce, and traditional channels. We are currently looking for a Manager Analyst – Analytics to join us. In this role, you will work on
various projects for the in-house team across data management, reporting, and analytics.
Responsibilities:
• Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
• Develop data extraction and manipulation code based on business rules
• Design and construct data stores and the procedures for their maintenance
• Develop and maintain strong relationships with stakeholders
• Write high-quality code per prescribed standards
• Participate in internal projects as required
Requirements:
• 2–5 years of strong experience working with SQL, Python, and ETL development
• Strong experience writing complex SQL queries
• Good communication skills
• Good experience working with a BI tool such as Tableau or Power BI
• Familiarity with various cloud technologies and their offerings within the data and data-warehousing space
• Snowflake and AWS are good to have
Minimum qualifications:
• B. Tech./MCA or equivalent preferred
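The SQL and Python skills this posting lists typically come together in small ETL-style tasks: load data, then answer a business question with a join and an aggregate. A minimal, hypothetical sketch using Python's built-in sqlite3 (table names, columns, and figures are all invented for illustration):

```python
import sqlite3

# Toy, in-memory dataset: marketing campaigns and their spend records.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE campaigns (id INTEGER PRIMARY KEY, channel TEXT);
    CREATE TABLE spend (campaign_id INTEGER, amount REAL);
""")
conn.executemany("INSERT INTO campaigns VALUES (?, ?)",
                 [(1, "search"), (2, "social"), (3, "search")])
conn.executemany("INSERT INTO spend VALUES (?, ?)",
                 [(1, 100.0), (2, 40.0), (3, 60.0), (1, 25.0)])

# Total spend per channel, keeping only channels above a threshold:
# a join + GROUP BY + HAVING, the shape of many reporting queries.
rows = conn.execute("""
    SELECT c.channel, SUM(s.amount) AS total
    FROM campaigns c JOIN spend s ON s.campaign_id = c.id
    GROUP BY c.channel
    HAVING total > 50
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('search', 185.0)]
```

An in-memory SQLite database like this is a convenient way to prototype business rules before porting a query to a production warehouse.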
Excellent hands-on experience (2+ years) with Big Data, ETL development, and data processing.
RESPONSIBILITIES:
Requirement understanding and elicitation; analysis of data and workflows; contribution to product,
project, and proof-of-concept (POC) work
Contribute to preparing design documents and effort estimations.
Develop AI/ML models using best-in-class techniques.
Building, testing, and deploying AI/ML solutions.
Work with Business Analysts and Product Managers to assist with defining functional user
stories.
Ensure deliverables across teams are of high quality and clearly documented.
Recommend best ML practices/Industry standards for any ML use case.
Proactively take up R&D and recommend solution options for any ML use case.
REQUIREMENTS:
Required Skills
Overall experience of 4 to 7 Years working on AI/ML framework development
Good programming knowledge of Python is a must.
Good knowledge of R and SAS is desired.
Good hands-on working knowledge of SQL, data modelling, and CRISP-DM.
Proficiency with univariate/multivariate statistics, algorithm design, and predictive AI/ML modelling.
Strong knowledge of machine learning algorithms: linear regression, logistic regression, KNN,
Random Forest, Support Vector Machines, and Natural Language Processing.
Experience with NLP and deep neural networks using synthetic and artificial data.
Involved in different phases of the SDLC, with good working exposure to methodologies such as
Agile.
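Of the algorithms named above, k-nearest neighbours is simple enough to sketch in a few lines of plain Python. This is only an illustrative toy (the training points and labels are invented), not a production implementation:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points under Euclidean distance. `train` is a list
    of ((features...), label) pairs."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Invented 2-D toy data: two clusters labelled "A" and "B".
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
         ((4.0, 4.0), "B"), ((4.2, 3.9), "B")]
print(knn_predict(train, (1.1, 1.0)))  # A
print(knn_predict(train, (4.1, 4.0)))  # B
```

In practice a library such as scikit-learn would be used, but the voting-over-nearest-points idea is exactly this.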
Looking for freelance work?
We are seeking a freelance Data Engineer with 7+ years of experience
Skills Required: Deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems
What we are looking for
We are seeking an experienced Senior Data Engineer with experience in architecture, design, and development of highly scalable data integration and data engineering processes
- The Senior Consultant must have a strong understanding and experience with data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems
- Strong in scripting languages like Python, Scala
- 5+ years of hands-on experience with one or more data integration/ETL tools.
- Experience building on-prem data warehousing solutions.
- Experience with designing and developing ETLs, Data Marts, Star Schema
- Designing a data warehouse solution using Synapse or Azure SQL DB
- Experience building pipelines using Synapse or Azure Data Factory to ingest data from various sources
- Understanding of integration run times available in Azure.
- Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL), as well as working familiarity with a variety of databases
Job Location: Chennai
Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Condé Nast’s industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate
problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with
cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational databases, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT,
and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, or Python) and
Spark.
• Creative view of markets and technologies combined with a passion to create the
future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data
Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, and CVS.
• Understanding of Hadoop Architecture and Hive SQL
• Knowledge of at least one workflow orchestration tool
• Understanding of Agile framework and delivery
Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a
plus
● Understanding of Digital web events, ad streams, context models
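Workflow orchestrators such as the Airflow mentioned above model a pipeline as a DAG of tasks and run them in dependency order. The core idea can be sketched with the standard library's `graphlib` (the task names here are invented placeholders, not a real pipeline):

```python
from graphlib import TopologicalSorter

# Declare task dependencies: each key runs only after its dependencies.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

executed = []
for task in TopologicalSorter(dag).static_order():
    executed.append(task)  # a real orchestrator would execute the task here

print(executed)  # ['extract', 'transform', 'load', 'report']
```

Airflow adds scheduling, retries, and monitoring on top, but the dependency-ordered execution is the same pattern.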
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - generating, along the way, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Experience Range |
2 Years - 10 Years |
Function | Information Technology |
Desired Skills |
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformations and aggregations
• Good at ELT architecture: business-rules processing and data extraction from a Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
|
Education Type | Engineering |
Degree / Diploma | Bachelor of Engineering, Bachelor of Computer Applications, Any Engineering |
Specialization / Subject | Any Specialisation |
Job Type | Full Time |
Job ID | 000018 |
Department | Software Development |
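The PySpark transformation-and-aggregation work this posting describes follows a groupBy/agg pattern. Since Spark itself needs a cluster runtime, here is the same idea sketched with only the standard library (the event records and field names are invented; in PySpark this would be roughly `df.groupBy("brand").agg(F.sum("revenue"))`):

```python
from collections import defaultdict

# Invented toy event records, standing in for rows of a DataFrame.
events = [
    {"brand": "vogue", "revenue": 10.0},
    {"brand": "wired", "revenue": 5.0},
    {"brand": "vogue", "revenue": 7.5},
]

# Group by brand and sum revenue - the core of many reporting jobs.
totals = defaultdict(float)
for e in events:
    totals[e["brand"]] += e["revenue"]

print(dict(totals))  # {'vogue': 17.5, 'wired': 5.0}
```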
In 2020, ReNew Power, India’s largest renewables developer, acquired Climate Connect. Following ReNew’s listing on NASDAQ in summer 2021, Climate Connect became the technology anchor of a new, fully independent subsidiary - Climate Connect Digital - with backing from ReNew as the anchor investor to pursue an ambitious and visionary new strategy for rapid organic and inorganic growth.
Our mission has technology at its core and involves unlocking value through intelligent software, digitalisation, and ‘horizontal integration’ across the energy ecosystem. Computational power and machine learning in the energy sector have yet to be fully leveraged, and they can create massive value.
We are looking for people with knowledge of:
● Excellent verbal communications, including the ability to clearly and concisely articulate complex concepts to both technical and non-technical collaborators
● Demonstrated history of knowledge in Computer Science, Statistics, Mathematics, Software Engineering or related technical fields
● Industry experience with proven ability to apply scientific methods to solve real-world problems on large scale data
● Extensive experience with Python and SQL for software development, data analysis, and machine learning
● Experience with libraries such as TensorFlow, Keras, NumPy, scikit-learn, pandas, scikit-image, Matplotlib, Jupyter, and Statsmodels
● Experience on Time Series analysis, including EDA, Statistical inferences, ARIMA, GARCH
● Knowledge of Cluster Analysis, Classification Trees, Discriminant Analysis, Neural Networks, Deep Learning, Logistic Regression, Associations Analysis
● Hands-on experience implementing deep learning models with video and time-series data (CNNs, LSTMs, autoencoders, RBMs)
● Experience of Regression, Multicriteria Decision Making, Descriptive Statistics, Hypothesis Testing, Segmentation/ Classification, Predictive Analytics
● Aptitude and experience in applied statistics and machine learning techniques
● Firm grasp of visualization tools interactive and self-serving such as business intelligence and notebooks
● Experience launching production-quality machine learning models at scale e.g. dataset construction, preprocessing, deployment, monitoring, quality assurance
● Experience with math programming is an added advantage. For example: optimization, computational geometry, numerical linear algebra, etc.
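Several of the items above (regression, descriptive statistics, time-series EDA) rest on ordinary least squares. A minimal sketch of fitting a linear trend to a short series, using only the standard library; the series values are invented, and libraries such as Statsmodels (ARIMA, GARCH) build on these same foundations:

```python
def fit_trend(ys):
    """Fit y = slope * t + intercept to values observed at t = 0, 1, ...
    by ordinary least squares. Returns (slope, intercept)."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# An invented, perfectly linear toy series: expect slope 2, intercept 2.
slope, intercept = fit_trend([2.0, 4.0, 6.0, 8.0])
print(slope, intercept)  # 2.0 2.0
```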
What you’ll work on:
We are developing a marketing automation platform through which an electricity retailer may apply a suite of proprietary ML algorithms to optimize outcomes across a range of channels and touchpoints. We require the services of a data science professional who can design and implement various AI/ML models that optimize the performance, quality, and reliability of the product. This position offers a potential pathway to leading an entire ML expert team. These are a few things you can look forward to working on:
● Translating high-level problems and key objectives into granular model requirements.
● Defining acceptance criteria that are well structured, detailed, and comprehensive.
● Developing and testing algorithms using our price forecasts, and customers' energy portfolio.
● Collaborating with the software engineering team in deploying the developed models tailored to specific customer needs.
● Participating in the software development process, and doing the required testing, and debugging to support the deployed models.
● Taking responsibility for ensuring tracking of appropriate events/metrics, so that monitoring is timely and rigorous.
● Driving the response to the discovery of regressions or failures, by undertaking various exercises (e.g. debugging, RCA, etc.) as needed
Experience:
● 6–11 years of experience in the field of Data Science or Machine Learning
Qualifications:
● B.E / B. Tech / M. Tech / PhD in CS/IT or Data Sciences
What’s in it for you
We offer competitive salaries based on prevailing market rates. In addition to your introductory package, you can expect to receive the following benefits:
Flexible working hours
Unlimited annual leave
Learning and development budget
Medical insurance/Term insurance, Gratuity benefits over and above the salaries
Access to industry and domain thought leaders
At Climate Connect Digital, you get a rare opportunity to join an established company at the early stages of a significant and well-backed global growth push.
Link to apply - https://climateconnect.digital/careers/?jobId=gaG9dgeTYBvF
- Extract and present valuable information from data
- Understand business requirements and generate insights
- Build mathematical models, validate and work with them
- Explain complex topics tailored to the audience
- Validate and follow up on results
- Work with large and complex data sets
- Establish priorities with clear goals and responsibilities to achieve a high level of performance.
- Work in an agile and iterative manner on solving problems
- Proactively evaluate different options and solve problems in innovative ways; develop new solutions or combine existing methods to create new approaches
- Good understanding of Digital & analytics
- Strong communication skills, orally and in writing
Job Overview:
As a Data Scientist, you will work in collaboration with our business and engineering people, on creating value from data. Often the work requires solving complex problems by turning vast amounts of data into business insights through advanced analytics, modeling, and machine learning. You have a strong foundation in analytics, mathematical modeling, computer science, and math - coupled with a strong business sense. You proactively fetch information from various sources and analyze it for better understanding of how the business performs. Furthermore, you model and build AI tools that automate certain processes within the company. The solutions produced will be implemented to impact business results.
Primary Responsibilities:
- Develop an understanding of business obstacles, create solutions based on advanced analytics and draw implications for model development
- Combine, explore, and draw insights from data. Often large and complex data assets from different parts of the business.
- Design and build explorative, predictive- or prescriptive models, utilizing optimization, simulation, and machine learning techniques
- Prototype and pilot new solutions and be a part of the aim of ‘productizing’ those valuable solutions that can have an impact at a global scale
- Guide and coach other chapter colleagues to help solve data/technical problems at an operational level, and in methodologies to help improve development processes
- Identify and interpret trends and patterns in complex data sets to enable the business to make data-driven decisions
Roles & Responsibilities
- Designing and delivering a best-in-class, highly scalable data governance platform
- Improving processes and applying best practices
- Contribute to all scrum ceremonies, assuming the role of Scrum Master on a rotational basis
- Development, management and operation of our infrastructure to ensure it is easy to deploy, scalable, secure and fault-tolerant
- Flexible on working hours as per business needs