We are looking for a Snowflake developer for one of our premium clients, for their PAN India locations.
Company Overview:
At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day.
We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.
Job Description:
- Candidate should have strong technical and analytical skills, with particular depth in SQL Server, reporting tools such as Tableau, Power BI, and SSRS, and .NET.
- Candidate should have sufficient experience to properly understand the project deliverables.
- Candidate will be responsible for the tasks assigned to them within the project.
- Candidate will be responsible for delivering with the required quality, on time, and within cost, adhering to the industry standards defined for the project.
- Candidate will be involved in client interaction.
- Candidate should possess excellent communication skills.
Required Skills: BI Gateway, MS SQL Server, Tableau, Power BI, .NET, OLAP, UI/UX, Dashboard Building
Experience: 5+ years
Job Location: Remote/Saudi Arabia
Work Timings: 2:30 pm to 11:30 pm
- Design the architecture of our big data platform
- Perform and oversee tasks such as writing scripts, calling APIs, web scraping, and writing SQL queries
- Design and implement data stores that support the scalable processing and storage of our high-frequency data
- Maintain our data pipeline
- Customize and oversee integration tools, warehouses, databases, and analytical systems
- Configure and provide availability for data-access tools used by all data scientists
Data Scientist
Requirements
● B.Tech/Master's in Mathematics, Statistics, Computer Science, or another quantitative field
● 2-5 years of work experience in the ML domain
● Hands-on coding experience in Python
● Experience in machine learning techniques such as Regression, Classification,
Predictive modeling, Clustering, Deep Learning stack, NLP
● Working knowledge of Tensorflow/PyTorch
Optional add-ons:
● Experience with distributed computing frameworks: MapReduce, Hadoop, Spark, etc.
● Experience with databases: MongoDB
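As a concrete illustration of the classification techniques listed above, here is a minimal, self-contained sketch of logistic regression trained with plain gradient descent. The data, labels, and learning rate are purely illustrative assumptions, not anything specified by this role.

```python
import math
import random

# Illustrative sketch only: learn to separate points by whether x0 + x1 > 1,
# using logistic regression with plain stochastic gradient descent.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(points, labels, lr=0.5, epochs=500):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in zip(points, labels):
            p = sigmoid(w[0] * x0 + w[1] * x1 + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w[0] -= lr * err * x0
            w[1] -= lr * err * x1
            b -= lr * err
    return w, b

random.seed(0)
points = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x0 + x1 > 1 else 0 for x0, x1 in points]
w, b = train(points, labels)

def predict(x0, x1):
    return sigmoid(w[0] * x0 + w[1] * x1 + b) > 0.5

accuracy = sum(predict(x0, x1) == bool(y)
               for (x0, x1), y in zip(points, labels)) / len(points)
```

In practice a candidate would reach for scikit-learn or TensorFlow/PyTorch rather than hand-rolled gradient descent; the sketch only shows the underlying mechanics.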
In 2020, ReNew Power, India's largest renewables developer, acquired Climate Connect. Following ReNew's listing on NASDAQ in summer 2021, Climate Connect became the technology anchor of a new, fully independent subsidiary, Climate Connect Digital, backed by ReNew as anchor investor to pursue an ambitious new strategy of rapid organic and inorganic growth.
Our mission has technology at its core and involves unlocking value through intelligent software, digitalisation, and ‘horizontal integration’ across the energy ecosystem. However, computational power and machine learning in the energy sector have yet to be fully leveraged and can create massive value.
We are looking for people with:
● Excellent verbal communication skills, including the ability to clearly and concisely articulate complex concepts to both technical and non-technical collaborators
● Demonstrated history of knowledge in Computer Science, Statistics, Mathematics, Software Engineering or related technical fields
● Industry experience with proven ability to apply scientific methods to solve real-world problems on large scale data
● Extensive experience with Python and SQL for software development, data analysis, and machine learning
● Experience with libraries such as TensorFlow, Keras, NumPy, scikit-learn, pandas, scikit-image, Matplotlib, Jupyter, and statsmodels
● Experience with time-series analysis, including EDA, statistical inference, ARIMA, and GARCH
● Knowledge of cluster analysis, classification trees, discriminant analysis, neural networks, deep learning, logistic regression, and association analysis
● Hands-on experience implementing deep learning models with video and time-series data (CNNs, LSTMs, autoencoders, RBMs)
● Experience with regression, multi-criteria decision making, descriptive statistics, hypothesis testing, segmentation/classification, and predictive analytics
● Aptitude and experience in applied statistics and machine learning techniques
● Firm grasp of interactive, self-service visualization tools, such as business intelligence dashboards and notebooks
● Experience launching production-quality machine learning models at scale (e.g., dataset construction, preprocessing, deployment, monitoring, quality assurance)
● Experience with math programming is an added advantage. For example: optimization, computational geometry, numerical linear algebra, etc.
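To ground the time-series requirements above, here is a small sketch of fitting an AR(1) model, the simplest member of the ARIMA family, by ordinary least squares on synthetic data. The true coefficient, series length, and noise level are illustrative assumptions; real work would use a library such as statsmodels.

```python
import random

# AR(1) process: x_t = phi * x_{t-1} + noise.
# Simulate a series with a known coefficient, then recover it by OLS.
random.seed(42)
phi_true = 0.8
series = [0.0]
for _ in range(2000):
    series.append(phi_true * series[-1] + random.gauss(0, 1))

# OLS estimate on lagged pairs: phi_hat = sum(x_{t-1} x_t) / sum(x_{t-1}^2)
num = sum(series[t - 1] * series[t] for t in range(1, len(series)))
den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
phi_hat = num / den

# One-step-ahead forecast from the last observation
forecast = phi_hat * series[-1]
```

The same least-squares idea, generalized to more lags plus differencing and moving-average terms, is what an ARIMA(p, d, q) fit automates.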
What you’ll work on:
We are developing a marketing automation platform through which an electricity retailer may apply a suite of proprietary ML algorithms to optimize outcomes across a range of channels and touchpoints. We require the services of a data science professional who can design and implement various AI/ML models that optimize the performance, quality, and reliability of the product. This position offers a potential pathway to leading an entire ML expert team. These are a few things you can look forward to working on:
● Translating high-level problems and key objectives into granular model requirements.
● Defining acceptance criteria that are well structured, detailed, and comprehensive.
● Developing and testing algorithms using our price forecasts and customers' energy portfolios.
● Collaborating with the software engineering team in deploying the developed models tailored to specific customer needs.
● Participating in the software development process, and doing the required testing, and debugging to support the deployed models.
● Taking responsibility for ensuring tracking of appropriate events/metrics, so that monitoring is timely and rigorous.
● Driving the response to discovered regressions or failures by undertaking exercises such as debugging and root-cause analysis as needed
Experience:
● 6-11 years of experience in the field of Data Science or Machine Learning
Qualifications:
● B.E / B. Tech / M. Tech / PhD in CS/IT or Data Sciences
What’s in it for you
We offer competitive salaries based on prevailing market rates. In addition to your introductory package, you can expect to receive the following benefits:
Flexible working hours
Unlimited annual leave
Learning and development budget
Medical insurance/term insurance and gratuity benefits, over and above the salary
Access to industry and domain thought leaders
At Climate Connect Digital, you get a rare opportunity to join an established company at the early stages of a significant and well-backed global growth push.
Link to apply - https://climateconnect.digital/careers/?jobId=gaG9dgeTYBvF
o Strong Python development skills, with 7+ years of experience with SQL
o A bachelor's or master's degree in Computer Science or a related area
o 5+ years of experience in data integration and pipeline development
o Experience implementing Databricks Delta Lake and data lakes
o Expertise designing and implementing data pipelines using modern data engineering approaches and tools: SQL, Python, Delta Lake, Databricks, Snowflake, Spark
o Experience working with multiple file formats (Parquet, Avro, Delta Lake) and APIs
o Experience with AWS Cloud data integration with S3
o Hands-on development experience with Python and/or Scala
o Experience with SQL and NoSQL databases
o Experience using data modeling techniques and tools (focused on dimensional design)
o Experience with microservice architecture using Docker and Kubernetes
o Experience working with one or more of the public cloud providers, i.e. AWS, Azure, or GCP
o Experience effectively presenting and summarizing complex data to diverse audiences through visualizations and other means
o Excellent verbal and written communication skills and strong leadership capabilities
Skills:
ML
Modelling
Python
SQL
Azure Data Lake, Data Factory, Databricks, Delta Lake
Job Description
Experience: 3+ yrs
We are looking for a MySQL DBA who will be responsible for ensuring the performance, availability, and security of clusters of MySQL instances. You will also be responsible for database design and architecture, and for orchestrating upgrades, backups, and provisioning of database instances. You will work in tandem with other teams, preparing documentation and specifications as required.
Responsibilities:
Database design and data architecture
Provision MySQL instances, both in clustered and non-clustered configurations
Ensure performance, security, and availability of databases
Prepare documentation and specifications
Handle common database procedures such as upgrades, backups, recovery, and migration
Profile server resource usage; optimize and tweak as necessary
Skills and Qualifications:
Proven expertise in database design and data architecture for large scale systems
Strong proficiency in MySQL database management
Solid experience with recent versions of MySQL
Understanding of MySQL's underlying storage engines, such as InnoDB and MyISAM
Experience with replication configuration in MySQL
Knowledge of de-facto standards and best practices in MySQL
Proficient in writing and optimizing SQL statements
Knowledge of MySQL features, such as its event scheduler
Ability to plan resource requirements from high level specifications
Familiarity with other SQL/NoSQL databases such as Cassandra, MongoDB, etc.
Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases
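The replication experience asked for above can be sketched in a few statements. This is a GTID-based source/replica setup fragment assuming MySQL 8.0 syntax; the hostname, user, and password are placeholders.

```sql
-- On the source: create a dedicated replication account (placeholder credentials).
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the replica: point it at the source (hypothetical host) and start replicating.
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'source-db.example.com',
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = 'repl_password',
  SOURCE_AUTO_POSITION = 1;
START REPLICA;

-- Verify replication health.
SHOW REPLICA STATUS\G
```

Both servers are assumed to have `gtid_mode=ON` and unique `server_id` values in their configuration; pre-8.0.22 deployments would use the older `CHANGE MASTER TO` / `START SLAVE` forms instead.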
Position: Big Data Engineer
What You'll Do
Punchh is seeking to hire a Big Data Engineer at either a senior or tech lead level. Reporting to the Director of Big Data, this engineer will play a critical role in leading Punchh's big data innovations. By leveraging prior industry experience in big data, they will help create cutting-edge data and analytics products for Punchh's business partners.
This role requires close collaboration with the data, engineering, and product organizations. Job functions include:
- Work with large data sets and implement sophisticated data pipelines handling both structured and unstructured data.
- Collaborate with stakeholders to design scalable solutions.
- Manage and optimize our internal data pipeline that supports marketing, customer success and data science to name a few.
- Serve as a technical leader of Punchh's big data platform that supports AI and BI products.
- Work with the infra and operations teams to monitor and optimize existing infrastructure.
- Occasional business travel is required.
What You'll Need
- 5+ years of experience as a Big Data engineering professional, developing scalable big data solutions.
- Advanced degree in computer science, engineering or other related fields.
- Demonstrated strength in data modeling, data warehousing and SQL.
- Extensive knowledge of cloud technologies, e.g. AWS and Azure.
- Excellent software engineering background. High familiarity with software development life cycle. Familiarity with GitHub/Airflow.
- Advanced knowledge of big data technologies, such as programming languages (Python, Java), relational databases (Postgres, MySQL), NoSQL (MongoDB), Hadoop (EMR), and streaming (Kafka, Spark).
- Strong problem solving skills with demonstrated rigor in building and maintaining a complex data pipeline.
- Exceptional communication skills and the ability to articulate complex concepts with thoughtful, actionable recommendations.