About Artivatic
● Able to contribute to gathering functional requirements, developing technical
specifications, and planning test cases
● Demonstrating technical expertise and solving challenging programming and design
problems
● 60% hands-on coding with architecture ownership of one or more products
● Ability to articulate architectural and design options, and educate development teams and
business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams including product management, QA/QE,
various product lines, and/or business units to drive forward results
Requirements
● BS/MS in computer science or equivalent work experience
● 8-12 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra,
Kafka, ZooKeeper
● Expertise with any of the following object-oriented languages: Java/J2EE, Scala,
Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: Demonstrated ability to explain complex technical issues to
both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Business Acumen - strategic thinking & strategy development
● Experience with cloud platforms, preferably AWS
● A good understanding of, and the ability to develop, software, prototypes, or proofs of
concept (POCs) for various Data Engineering requirements
● Experience with Agile Development, SCRUM, or Extreme Programming methodologies
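As a minimal illustration of the TDD expertise listed in the requirements above: a hypothetical `slugify` helper together with the unit tests that, under TDD, would be written first. Both the function and the test cases are invented for this sketch.

```python
import re
import unittest

def slugify(title: str) -> str:
    """Convert a title to a URL-safe slug (hypothetical example function)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

class TestSlugify(unittest.TestCase):
    # Under TDD, these tests exist before slugify does, and the function
    # is implemented until they pass.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Data Engineer Role"), "data-engineer-role")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("QA/QE -- Testing!"), "qa-qe-testing")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

The same red-green loop scales up to integration tests in a real SDLC; the point is that behavior is pinned down by tests before implementation begins.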
- Own the product analytics of Bidgely's end-user-facing products; measure and identify areas of improvement through data
- Liaise with Product Managers and Business Leaders to understand product issues and priorities, and support them through relevant product analytics
- Own the automation of product analytics through good SQL knowledge
- Develop early warning metrics for production and highlight issues and breakdowns for resolution
- Resolve client escalations and concerns regarding key business metrics
- Define and own execution
- Own the Energy Efficiency program designs, dashboard development, and monitoring of existing Energy Efficiency programs
- Deliver data-backed analysis and statistically proven solutions
- Research and implement best practices
- Mentor team of analysts
Qualifications and Education Requirements
- B.Tech from a premier institute with 5+ years analytics experience or Full-time MBA from a premier b-school with 3+ years of experience in analytics/business or product analytics
- Bachelor's degree in Business, Computer Science, Computer Information Systems, Engineering, Mathematics, or other business/analytical disciplines
Skills needed to excel
- Proven analytical and quantitative skills and an ability to use data and metrics to back up assumptions, develop business cases, and complete root cause analyses
- Excellent understanding of retention, churn, and acquisition of the user base
- Ability to employ statistics and anomaly detection techniques for data-driven analytics
- Ability to put yourself in the shoes of the end customer and understand what “product excellence” means
- Ability to rethink existing products and use analytics to identify new features and product improvements
- Ability to rethink existing processes and design new processes for more effective analyses
- Strong SQL knowledge; working experience with Looker and Tableau a great plus
- Strong commitment to quality, visible in the thoroughness of analysis and techniques employed
- Strong project management and leadership skills
- Excellent communication (oral and written) and interpersonal skills, and an ability to effectively communicate with both business and technical teams
- Ability to coach and mentor analysts on technical and analytical skills
- Good knowledge of statistics, basic machine learning, and A/B testing is preferable
- Experience as a growth hacker and/or in product analytics is a big plus
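The retention and churn understanding asked for above can be made concrete with a small sketch. The activity log and month keys below are invented for illustration; in practice these metrics would be computed in SQL against warehouse tables and surfaced in Looker or Tableau.

```python
from collections import defaultdict

# Hypothetical activity log: (user_id, month) pairs marking active users.
events = [
    ("u1", "2023-01"), ("u2", "2023-01"), ("u3", "2023-01"),
    ("u1", "2023-02"), ("u3", "2023-02"),
    ("u1", "2023-03"),
]

# Group active users by month.
active = defaultdict(set)
for user, month in events:
    active[month].add(user)

def retention(prev_month: str, month: str) -> float:
    """Fraction of prev_month's active users still active in month."""
    prev = active[prev_month]
    return len(prev & active[month]) / len(prev) if prev else 0.0

def churn(prev_month: str, month: str) -> float:
    """Complement of retention: fraction of users lost between months."""
    return 1.0 - retention(prev_month, month)

print(retention("2023-01", "2023-02"))  # 2 of 3 January users retained
```

A production version would also split these by acquisition cohort, which is what makes retention curves comparable across user generations.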
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers, and developers helps retailers launch successful cloud initiatives so they can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce adoption time and effort, so clients benefit directly from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. Traditional e-commerce infrastructure is limiting and poses a huge challenge in terms of the money spent on physical infrastructure.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment to take their business to the next level. Our team of proficient architects, engineers, and developers has been designing, building, and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the available cloud solutions that best meet our clients' requirements.
Responsibilities:
● Studying, transforming, and converting data science prototypes
● Deploying models to production
● Training and retraining models as needed
● Analyzing the ML algorithms that could be used to solve a given problem and ranking them by their respective scores
● Analyzing the errors of the model and designing strategies to overcome them
● Identifying differences in data distribution that could affect model performance in real-world situations
● Performing statistical analysis and using results to improve models
● Supervising the data acquisition process if more data is needed
● Defining data augmentation pipelines
● Defining the pre-processing or feature engineering to be done on a given dataset
● Extending and enriching existing ML frameworks and libraries
● Understanding when the findings can be applied to business decisions
● Documenting machine learning processes
Basic requirements:
● 4+ years of IT experience in which at least 2+ years of relevant experience primarily in converting data science prototypes and deploying models to production
● Proficiency with Python and machine learning libraries such as scikit-learn, matplotlib, seaborn and pandas
● Knowledge of Big Data frameworks like Hadoop, Spark, Pig, Hive, Flume, etc
● Experience in working with ML frameworks like TensorFlow, Keras, OpenCV
● Strong written and verbal communications
● Excellent interpersonal and collaboration skills.
● Expertise in visualizing and manipulating big datasets
● Familiarity with Linux
● Ability to select hardware to run an ML model with the required latency
● Robust data modelling and data architecture skills.
● Advanced degree in Computer Science/Math/Statistics or a related discipline.
● Advanced Math and Statistics skills (linear algebra, calculus, Bayesian statistics, mean, median, variance, etc.)
Nice to have
● Familiarity with writing Java and R code
● Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
● Verifying data quality, and/or ensuring it via data cleaning
● Supervising the data acquisition process if more data is needed
● Finding available datasets online that could be used for training
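Several bullets above mention identifying differences in data distribution that could affect a deployed model's real-world performance. One minimal sketch of such a drift check is a hand-rolled two-sample Kolmogorov-Smirnov statistic; the sample data and the 0.5 cutoff below are invented placeholders.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs (0 = identical samples, 1 = fully disjoint)."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        # Advance past all ties at the current value before comparing CDFs.
        x = min(a[i], b[j])
        while i < n and a[i] == x:
            i += 1
        while j < m and b[j] == x:
            j += 1
        d = max(d, abs(i / n - j / m))
    return d

# Hypothetical feature values seen at training time vs. in production.
train_scores = [0.1, 0.2, 0.3, 0.4, 0.5]
live_scores = [0.6, 0.7, 0.8, 0.9, 1.0]

drift = ks_statistic(train_scores, live_scores)
# The 0.5 cutoff is arbitrary; a real check would use a proper
# significance test such as scipy.stats.ks_2samp.
if drift > 0.5:
    print("possible distribution shift:", drift)
```

In a production monitoring job this comparison would run per feature on fresh batches, alerting when the statistic exceeds a calibrated threshold.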
Why Explore a Career at Kloud9:
With job opportunities in prime locations across the US, London, Poland, and Bengaluru, we help you build a career path in the cutting-edge technologies of AI, Machine Learning, and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.
Title: Data Engineer – Snowflake
Location: Mysore (Hybrid model)
Experience: 2-8 years
Type: Full Time
Walk-in date: 25th Jan 2023 @Mysore
Job Role: We are looking for an experienced Snowflake developer to join our team as a Data Engineer, working as part of a team to help design and develop data-driven solutions that deliver insights to the business. The ideal candidate is a data pipeline builder and data wrangler who enjoys building data-driven systems from the ground up to drive analytical solutions. You will be responsible for building and optimizing our data pipelines as well as building automated processes for production jobs, and you will support our software developers, database architects, data analysts, and data scientists on data initiatives.
Key Roles & Responsibilities:
- Use advanced Snowflake, Python, and SQL to extract data from source systems for ingestion into a data pipeline.
- Design, develop and deploy scalable and efficient data pipelines.
- Analyze and assemble large, complex datasets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements. For example: automating manual processes, optimizing data delivery, re-designing data platform infrastructure for greater scalability.
- Build the infrastructure required for optimal extraction, loading, and transformation (ELT) of data from various data sources using AWS and Snowflake, leveraging Python or SQL.
- Monitor cloud-based systems and components for availability, performance, reliability, security and efficiency
- Create and configure appropriate cloud resources to meet the needs of the end users.
- As needed, document topology, processes, and solution architecture.
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies
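The ELT responsibilities above (land raw data first, then transform inside the warehouse) can be sketched in miniature. Here sqlite3 stands in for Snowflake, and the source rows, table names, and columns are all invented; a real pipeline would use the Snowflake connector against a warehouse.

```python
import sqlite3

# sqlite3 stands in for Snowflake in this sketch; the same EL-then-T
# pattern applies with the Snowflake connector in production.
conn = sqlite3.connect(":memory:")

# Extract: rows pulled from a source system (hardcoded here).
raw_orders = [("o1", "u1", 120.0), ("o2", "u1", 80.0), ("o3", "u2", 50.0)]

# Load: land the raw data untouched first (the "EL" in ELT).
conn.execute("CREATE TABLE raw_orders (order_id TEXT, user_id TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# Transform: build an analytics-ready table inside the warehouse with SQL.
conn.execute("""
    CREATE TABLE user_spend AS
    SELECT user_id, SUM(amount) AS total_spend, COUNT(*) AS orders
    FROM raw_orders
    GROUP BY user_id
""")

print(conn.execute("SELECT * FROM user_spend ORDER BY user_id").fetchall())
# → [('u1', 200.0, 2), ('u2', 50.0, 1)]
```

Landing raw data before transforming is what lets the transformation layer be re-run or changed without touching the sources, which is the practical argument for ELT over ETL in a warehouse like Snowflake.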
Qualifications & Experience:
- Bachelor's degree in computer science, computer engineering, or a related field.
- 2-8 years of experience working with Snowflake
- 2+ years of experience with AWS services.
- Candidate should be able to write stored procedures and functions in Snowflake.
- At least 2 years' experience as a Snowflake developer.
- Strong SQL knowledge.
- Data ingestion into Snowflake using Snowflake procedures.
- ETL experience is a must (any tool).
- Candidate should be familiar with Snowflake architecture.
- Experience working on migration projects.
- Data warehousing concepts (optional).
- Experience with cloud data storage and compute components, including Lambda functions, EC2 instances, and containers.
- Experience with data pipeline and workflow management tools: Airflow, etc.
- Experience cleaning, testing, and evaluating data quality from a wide variety of ingestible data sources
- Experience working with Linux and UNIX environments.
- Experience with profiling data, with and without data definition documentation
- Familiar with Git
- Familiar with issue tracking systems like JIRA (Project Management Tool) or Trello.
- Experience working in an agile environment.
Desired Skills:
- Experience in Snowflake. Must be willing to become Snowflake certified within the first 3 months of employment.
- Experience with streaming data ingestion using Snowpipe
- Working knowledge of AWS or Azure
- Experience in migrating from on-prem to cloud systems
In 2020, ReNew Power, India's largest renewables developer, acquired Climate Connect. Following ReNew's listing on NASDAQ in summer 2021, Climate Connect became the technology anchor of a new, fully independent subsidiary, Climate Connect Digital, with backing from ReNew as the anchor investor to pursue an ambitious and visionary new strategy for rapid organic and inorganic growth.
Our mission has technology at its core and involves unlocking value through intelligent software, digitalisation, and ‘horizontal integration’ across the energy ecosystem. Computational power and machine learning have yet to be fully leveraged in the energy sector, where they can create massive value.
We are looking for people with:
● Excellent verbal communication skills, including the ability to clearly and concisely articulate complex concepts to both technical and non-technical collaborators
● Demonstrated knowledge of Computer Science, Statistics, Mathematics, Software Engineering, or related technical fields
● Industry experience with proven ability to apply scientific methods to solve real-world problems on large scale data
● Extensive experience with Python and SQL for software development, data analysis, and machine learning
● Experience with libraries: TensorFlow, Keras, NumPy, scikit-learn, pandas, scikit-image, matplotlib, Jupyter, statsmodels
● Experience with time series analysis, including EDA, statistical inference, ARIMA, GARCH
● Knowledge of Cluster Analysis, Classification Trees, Discriminant Analysis, Neural Networks, Deep Learning, Logistic Regression, Associations Analysis
● Hands-on experience implementing deep learning models on video and time series data (CNNs, LSTMs, autoencoders, RBMs)
● Experience of Regression, Multicriteria Decision Making, Descriptive Statistics, Hypothesis Testing, Segmentation/ Classification, Predictive Analytics
● Aptitude and experience in applied statistics and machine learning techniques
● Firm grasp of interactive, self-serve visualization tools such as business intelligence dashboards and notebooks
● Experience launching production-quality machine learning models at scale e.g. dataset construction, preprocessing, deployment, monitoring, quality assurance
● Experience with math programming is an added advantage. For example: optimization, computational geometry, numerical linear algebra, etc.
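As one concrete instance of the time-series skills listed above, the sketch below fits the coefficient of an AR(1) model by least squares on a simulated series. The series and parameters are invented, and real forecasting work would use a library such as statsmodels for full ARIMA/GARCH fitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process x_t = phi * x_{t-1} + noise, with phi = 0.8.
phi_true = 0.8
x = np.zeros(500)
for t in range(1, 500):
    x[t] = phi_true * x[t - 1] + rng.normal(scale=0.1)

# Least-squares estimate of phi from lagged pairs (x_{t-1}, x_t):
# phi_hat = sum(x_{t-1} * x_t) / sum(x_{t-1}^2).
phi_hat = float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))
print(f"estimated phi: {phi_hat:.2f}")
```

The recovered coefficient lands close to the true 0.8, which is the basic sanity check one runs before trusting a fitted model on real price or load data.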
What you’ll work on:
We are developing a marketing automation platform through which an electricity retailer may apply a suite of proprietary ML algorithms to optimize outcomes across a range of channels and touchpoints. We require the services of a data science professional who can design and implement various AI/ML models that optimize the performance, quality, and reliability of the product. This position offers a potential pathway to leading an entire ML expert team. These are a few things you can look forward to working on:
● Translating high-level problems and key objectives into granular model requirements.
● Defining acceptance criteria that are well structured, detailed, and comprehensive.
● Developing and testing algorithms using our price forecasts, and customers' energy portfolio.
● Collaborating with the software engineering team in deploying the developed models tailored to specific customer needs.
● Participating in the software development process, and doing the required testing, and debugging to support the deployed models.
● Taking responsibility for ensuring tracking of appropriate events/metrics, so that monitoring is timely and rigorous.
● Driving the response to the discovery of regressions or failures, by undertaking various exercises (e.g. debugging, RCA, etc.) as needed
Experience:
● 6-11 years of experience in the field of Data Science or Machine Learning
Qualifications:
● B.E / B.Tech / M.Tech / PhD in CS/IT or Data Science
What’s in it for you
We offer competitive salaries based on prevailing market rates. In addition to your introductory package, you can expect to receive the following benefits:
Flexible working hours
Unlimited annual leaves
Learning and development budget
Medical insurance/Term insurance, Gratuity benefits over and above the salaries
Access to industry and domain thought leaders
At Climate Connect Digital, you get a rare opportunity to join an established company at the early stages of a significant and well-backed global growth push.
Link to apply - https://climateconnect.digital/careers/?jobId=gaG9dgeTYBvF
Designation: Specialist - Cloud Service Developer (ABL_SS_600)
Position description:
- The person would be primarily responsible for developing solutions using AWS services such as Fargate, Lambda, ECS, ALB, NLB, and S3
- Apply advanced troubleshooting techniques to resolve issues pertaining to service availability, performance, and resiliency
- Monitor and optimize performance using AWS dashboards and logs
- Partner with engineering leaders and peers to deliver technology solutions that meet the business requirements
- Work with the cloud team in an agile approach to develop cost-optimized solutions
Primary Responsibilities:
- Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, and S3
Reporting Team
- Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
- Reporting Department: Application Development (2487)
Required Skills:
- AWS certification would be preferred
- Good understanding of monitoring (CloudWatch alarms, logs, custom metrics, SNS configuration)
- Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora, and other AWS services
- Knowledge of storage (S3, lifecycle management, event configuration) preferred
- Strong data structures and programming skills (PySpark / Python / Golang / Scala)
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will collaborate closely with the Data Science team and help the team build and deploy machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional / non-functional business requirements and foster data-driven decision making across the organization.
- Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Provide high operational excellence guaranteeing high availability and platform stability.
- Collaborate closely with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/NoSQL, schema design, and dimensional data modeling.
- Strong understanding of the Hadoop architecture and HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Strong skills in PySpark (Python & Spark); ability to create, manage, and manipulate Spark DataFrames; expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Knowledge of shell scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premise and cloud-based infrastructure.
- Having a good understanding of machine learning landscape and concepts.
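The data-validation and data-quality responsibilities above can be illustrated with a minimal rule-based check. The rows, field names, and rules here are hypothetical; a production pipeline would typically run checks like these in Spark or through a dedicated framework before publishing a table.

```python
# Hypothetical row-level data-quality checks of the kind a pipeline
# might run before publishing a table; rules and field names invented.
rows = [
    {"id": 1, "email": "a@example.com", "amount": 10.0},
    {"id": 2, "email": "", "amount": -5.0},
    {"id": 3, "email": "c@example.com", "amount": 7.5},
]

# Each named rule returns True when a row passes.
checks = {
    "non_empty_email": lambda r: bool(r["email"]),
    "non_negative_amount": lambda r: r["amount"] >= 0,
}

# Collect (row id, failed rule) pairs for monitoring/alerting.
failures = [
    (r["id"], name)
    for r in rows
    for name, check in checks.items()
    if not check(r)
]

print(failures)  # → [(2, 'non_empty_email'), (2, 'non_negative_amount')]
```

Tracking the failure count per rule over time is what turns these checks into a data-quality monitor rather than a one-off validation.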
Qualifications and Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.
Certifications:
Good to have at least one of the Certifications listed here:
AZ-900 - Azure Fundamentals
DP-200, DP-201, DP-203, AZ-204 - Data Engineering
AZ-400 - DevOps Certification
- Python coding skills
- Experience with scikit-learn, pandas, and TensorFlow/Keras
- Machine learning: designing ML models and explaining them, for regression, classification, dimensionality reduction, anomaly detection, etc.
- Implementing machine learning models and pushing them to production
- Creating Docker images for ML models; REST API creation in Python
- Additional skills (compulsory):
- Knowledge and professional experience of text- and NLP-related projects such as text classification, text summarization, topic modeling, etc.
- Additional skills (compulsory):
- Knowledge and professional experience of computer vision and deep learning for documents: CNNs, deep neural networks using TensorFlow/Keras for object detection, OCR implementation, document extraction, etc.
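The text-classification requirement above can be sketched with a tiny multinomial Naive Bayes classifier in pure Python. The training examples and labels are invented, and a real project would use scikit-learn or a pretrained language model; this only shows the shape of the task.

```python
import math
from collections import Counter, defaultdict

# Toy labeled support tickets for the text-classification task; invented.
train = [
    ("refund my payment please", "billing"),
    ("card was charged twice", "billing"),
    ("app crashes on login", "bug"),
    ("screen freezes after update", "bug"),
]

# Count word frequencies per label (multinomial Naive Bayes with
# add-one smoothing).
word_counts = defaultdict(Counter)
label_counts = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    label_counts[label] += 1
    vocab.update(words)

def predict(text):
    """Return the label with the highest log-posterior for the text."""
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / len(train))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("please refund the charge"))  # → billing
```

The same counting-plus-smoothing idea underlies the baseline most NLP projects start from before reaching for deeper models.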