R Programmer for academic project work; Masters or PhD in Statistics with advanced R programming experience
The project is on Monte Carlo simulation.
Must have a strong background in Monte Carlo simulation, matrix algebra, probability theory, and programming.
The candidate would be required to develop 3-4 R projects under tight deadlines.
3-4 month assignment.
Must be familiar with:
a) Monte Carlo methods, including simulation, estimation, sampling, optimization, learning, and visualization
b) Sequential Monte Carlo, including importance sampling and weighted samples, advanced importance sampling techniques, the sequential Monte Carlo framework (selection, pruning, resampling, ...), particle filtering in object tracking, and Monte Carlo Tree Search
c) Markov chains: the transition matrix, topology, positive recurrence and invariant measures, the ergodic theorem
d) Metropolis methods and their variants: the Metropolis algorithm and Hastings's generalization, reversible jumps and trans-dimensional MCMC (a minimal sketch follows this list)
e) The Gibbs sampler and its variants: hit-and-run, multi-grid, generalized Gibbs, Metropolized Gibbs
f) Cluster sampling: Ising/Potts models, Swendsen-Wang and cluster sampling
g) Langevin dynamics: Hamiltonian Monte Carlo, Langevin dynamics in machine learning, convergence analysis
h) Equi-energy and multi-domain samplers, the Wang-Landau algorithm, the Attraction-Diffusion algorithm, mapping the energy landscape
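For orientation, a minimal sketch of the random-walk Metropolis algorithm named in (d), targeting a standard normal density. It is written in Python purely for illustration (the project deliverables themselves would be in R), and the proposal scale and step count below are arbitrary choices, not requirements.

```python
import math
import random

def metropolis(log_target, x0, n_steps=10_000, step=1.0):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal."""
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)                # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)      # log acceptance ratio
        if random.random() < math.exp(min(0.0, log_alpha)):   # accept with prob min(1, alpha)
            x = proposal
        samples.append(x)
    return samples

# Example: sample from a standard normal target (log-density up to a constant).
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
print(sum(draws) / len(draws))  # sample mean should be close to 0
```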
About Webtiga Private limited
Experts in Web3 technologies: Metaverse, Generative AI, Web3, Cybersecurity, and DeFi
For more details - please check out our website- https://www.webtiga.com/solutions/
About Climate Connect Digital
Our team is inspired to change the world by making energy greener and more affordable. Established in 2011 in London, UK, we are now headquartered in Gurgaon, India. From unassuming beginnings, we have become a leading energy-AI software player at the vanguard of accelerating the global energy transition.
In 2020, we were acquired by ReNew Power, India's largest renewables developer. Following ReNew's NASDAQ listing in summer 2021, Climate Connect Digital was newly formed as a fully independent subsidiary, with backing from ReNew for an ambitious and visionary new strategy and rapid growth.
Our mission has technology at its core and involves unlocking value by leveraging information technology and AI/ML across the energy ecosystem. However, the use of such technologies in the energy sector to create massive value for all stakeholders is still nascent.
How do you fit in
As we start into our first strong growth phase, we are looking for a Data Engineer to build the data infrastructure to support business and product growth.
You are someone who can see projects through from beginning to end, coach others, and self-manage. We’re looking for an eager individual who can guide our data stack using AWS services with technical knowledge, communication skills, and real-world experience.
The data flowing through our platform directly contributes to decision-making by algorithms & all levels of leadership alike. If you’re passionate about building tools that enhance productivity, improve green energy, reduce waste, and improve work-life harmony for a large and rapidly growing finance user base, come join us!
- Iterate, build, and implement our data model, data warehousing, and data integration architecture using AWS & GCP services
- Build solutions that ingest data from source and partner systems into our data infrastructure, where the data is transformed, intelligently curated and made available for consumption by downstream operational and analytical processes
- Integrate data from source systems using common ETL tools or programming languages (e.g. Python, Scala, Node.js); a minimal illustrative sketch follows this list
- Develop tailor-made strategies, concepts and solutions for the efficient handling of our growing amounts of data
- Work iteratively with our data scientist to build up fact tables (e.g. container ship movements), dimension tables (e.g. weather data), ETL processes, and build the data catalog
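Purely as an illustration of the ingestion and curation work described above (the bucket, file, table, and connection details below are hypothetical, not our actual pipeline), a minimal sketch using boto3, pandas, and SQLAlchemy:

```python
import io

import boto3
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical source object and target table, for illustration only.
BUCKET, KEY = "partner-exports", "meter_readings/2024-01-01.csv"
TARGET_TABLE = "stg_meter_readings"

def ingest(warehouse_url: str) -> int:
    """Read a partner CSV from S3, apply light curation, and load it into the warehouse."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
    df = pd.read_csv(io.BytesIO(body), parse_dates=["reading_ts"])

    # Minimal curation: drop duplicates and rows without a meter id.
    df = df.drop_duplicates().dropna(subset=["meter_id"])

    engine = create_engine(warehouse_url)
    df.to_sql(TARGET_TABLE, engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    # Hypothetical warehouse connection string.
    print(ingest("postgresql://user:password@warehouse:5432/analytics"))
```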
- 4+ years of experience designing, building and maintaining data architecture and warehousing using GCP/AWS services
- Authoritative in ETL optimization, designing, coding, and tuning big data processes using Apache Spark, R, Python, C# and/or similar technologies
- Experience managing GCP/AWS resources using Terraform
- Experience in Data engineering and infrastructure work for analytical and machine learning processes
- Experience with ETL tooling, and migrating ETL code from one technology to another will be a benefit
- Experience with Data visualization / dashboarding tools as QA/QC data processes
- Independent, self-starter who thrives in a fast-paced environment
What’s in it for you
We offer competitive salaries based on prevailing market rates. In addition to your introductory package, you can expect to receive the following benefits:
- Remote-first work culture
- Flexible working hours and leave policy
- Learning and development opportunities
- Medical insurance/Term insurance, Gratuity benefits over and above the salaries
At Climate Connect, you get a rare opportunity to join an established company at the early stages of a significant and well-backed global growth push.
We are building a remote-first organisation, and this ethos is ingrained in the team. We understand its importance for the success of any next-generation technology company. The team includes passionate and self-driven people with unconventional backgrounds, and we're seeking a similar spirit with the right potential.
What it’s like to work with us
When you join us, you become part of a strong network and an accomplished legacy from leading technology and business schools worldwide, such as the Indian Institute of Technology, Oxford University, University of Cambridge, University College London, and many more.
We don’t believe in constrained traditional hierarchies and instead work in flexible teams with the freedom to achieve successful business outcomes. We want more people who can thrive in a fast-paced, collaborative environment. Our comprehensive support system comprises a global network of advisors and experts, providing unparalleled opportunities for learning and growth.
Location : Gurgaon
About the company:
The company is changing the way cataloging is done across the globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos without the need for any external professional help. Imagine 30M+ merchants shooting product images or videos using their smartphones, and then choosing filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose high-quality "tuned-in" product visuals. The company has built the world's leading image editing AI software to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community, including the likes of Accel Partners, AngelList and prominent Founders and Internet company operators, who believe that there is a more intelligent and efficient way of doing Digital Production than how the world operates currently.
Job Description:
- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to play a senior leadership role in our Product & Technology Research Team.
- You will be leading a team of CV researchers to build models that automatically transform millions of raw e-commerce, automobile, food, and real-estate images into processed final images.
- You will be responsible for researching the latest art of the possible in the field of computer vision, designing the solution architecture for our offerings, and leading the Computer Vision teams to build the core algorithmic models and deploy them on cloud infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and models are being constantly trained and updated.
- Working alongside the product team to ensure that AI capabilities are built as democratized tools that allow internal as well as external stakeholders to innovate on top of them for the benefit of our customers.
- You will work closely with the Product & Engineering teams to convert the models into beautiful products that will be used by thousands of businesses every day to transform their images and videos.
- 3+ years of work experience in Computer Vision, with 5-10 years of overall work experience
- BS/MS/PhD degree in Computer Science, Engineering, or a related subject from an Ivy League institute
- Exposure to deep learning techniques and TensorFlow/PyTorch
- Prior expertise in building image processing applications using GANs, CNNs, and diffusion models
- Expertise with Python image processing libraries such as OpenCV (a toy sketch follows this list)
- Good hands-on experience with Python and the Flask or Django framework
- Authored publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, ACL)
- Prior experience managing teams and building large-scale AI / CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinking and problem-solving skills
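A toy sketch of the kind of building block the OpenCV requirement refers to (file names are placeholders, and a real product pipeline would rely on learned models such as the GANs/diffusion models above rather than a fixed threshold): isolating a product from a plain light background.

```python
import cv2
import numpy as np

def mask_product(image_path: str, out_path: str) -> None:
    """Roughly separate a product from a plain light background using a threshold mask."""
    img = cv2.imread(image_path)                               # BGR image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Pixels darker than the near-white background are assumed to belong to the product.
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    result = cv2.bitwise_and(img, img, mask=mask)              # keep only masked pixels
    cv2.imwrite(out_path, result)

if __name__ == "__main__":
    mask_product("raw_product.jpg", "masked_product.png")      # placeholder file names
```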
1. Bridging the gap between IT and the business using data analytics to assess processes, determine requirements and deliver data-driven recommendations and reports to executives and stakeholders.
2. Ability to search, extract, transform and load data from various databases, cleanse and refine data until it is fit-for-purpose
3. Work within various time constraints to meet critical business needs, while measuring and identifying activities performed and ensuring service requirements are met
4. Prioritization of issues to meet deadlines while ensuring high-quality delivery
5. Ability to pull data and to perform ad hoc reporting and analysis as needed
6. Ability to adapt quickly to new and changing technical environments as well as strong analytical and problem-solving abilities
7. Strong interpersonal and presentation skills
1. Advanced skills in designing reporting interfaces and interactive dashboards in Google Sheets and Excel
2. Experience working with senior decision-makers
3. Strong advanced SQL/MySQL and Python skills, with the ability to fetch data from the Data Warehouse as per the stakeholder's requirements (see the sketch after these lists)
4. Good knowledge and experience in Excel VBA and advanced Excel
5. Good experience in building Tableau analytical dashboards as per the stakeholder's reporting requirements
6. Strong communication/interpersonal skills
1. Experience in working on ad hoc requirements
2. Ability to adapt to shifting priorities
3. Experience working in the Fintech or E-commerce industry is preferable
4. Engineering degree and 2+ years of experience as a Business Analyst for finance processes
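A minimal sketch of the SQL-plus-Python warehouse pull mentioned in point 3 above (the connection string, table, and column names are hypothetical): fetching an aggregate with pandas and shaping it for an ad hoc report.

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Hypothetical warehouse connection and query, for illustration only.
engine = create_engine("mysql+pymysql://analyst:password@warehouse:3306/finance")

query = text("""
    SELECT order_date, payment_method, SUM(amount) AS gross_amount
    FROM   orders
    WHERE  order_date >= :start
    GROUP  BY order_date, payment_method
""")

# Pull the aggregate into a DataFrame and pivot it into a report-friendly layout.
df = pd.read_sql(query, engine, params={"start": "2024-01-01"})
report = df.pivot(index="order_date", columns="payment_method", values="gross_amount")
report.to_excel("gross_by_payment_method.xlsx")
```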
Fix issues with plugins for our Python-based ETL pipelines
Help with automation of standard workflow
Deliver Python microservices for provisioning and managing cloud infrastructure (see the sketch after this list)
Responsible for any refactoring of code
Effectively manage challenges associated with handling large volumes of data working to tight deadlines
Manage expectations with internal stakeholders and context-switch in a fast-paced environment
Thrive in an environment that uses AWS and Elasticsearch extensively
Keep abreast of technology and contribute to the engineering strategy
Champion best development practices and provide mentorship to others
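As a rough, hypothetical sketch of the provisioning style of microservice mentioned above (the bucket naming and tagging convention is invented, and a production service would add validation, auth, and structured logging), creating and tagging an S3 bucket with boto3:

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical tagging convention, for illustration only.
TEAM_TAG = {"Key": "team", "Value": "data-engineering"}

def provision_bucket(name: str, region: str = "eu-west-1") -> bool:
    """Create (or confirm) an S3 bucket and tag it; returns True once it is ready."""
    s3 = boto3.client("s3", region_name=region)
    try:
        s3.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={"LocationConstraint": region},
        )
    except ClientError as err:
        # Treat "already owned by us" as success; re-raise anything else.
        if err.response["Error"]["Code"] != "BucketAlreadyOwnedByYou":
            raise
    s3.put_bucket_tagging(Bucket=name, Tagging={"TagSet": [TEAM_TAG]})
    return True

if __name__ == "__main__":
    print(provision_bucket("example-etl-staging-bucket"))  # hypothetical bucket name
```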
First and foremost you are a Python developer, experienced with the Python Data stack
You love and care about data
Your code is an artistic manifesto reflecting how elegant you are in what you do
You feel sparks of joy when a new abstraction or pattern arises from your code
You support the DRY (Don't Repeat Yourself) and KISS (Keep It Short and Simple) principles
You are a continuous learner
You have a natural willingness to automate tasks
You have critical thinking and an eye for detail
Excellent ability and experience of working to tight deadlines
Sharp analytical and problem-solving skills
Strong sense of ownership and accountability for your work and delivery
Excellent written and oral communication skills
Mature collaboration and mentoring abilities
We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to take, your personal projects, as well as any contributions to open-source communities)
Delivering complex software, ideally in a FinTech setting
Experience with CI/CD tools such as Jenkins, CircleCI
Experience with code versioning (git / mercurial / subversion)
Strong knowledge of statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, etc.
Sound knowledge of querying databases and using statistical computer languages: R, Python, SQL, etc.
Strong understanding of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc. (a brief illustrative sketch follows this list)
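As a quick, generic illustration of the modelling techniques listed above (synthetic data; not tied to any particular project here), fitting a GLM-style logistic regression and a random forest with scikit-learn and comparing them:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real modelling problem.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [
    ("GLM (logistic regression)", LogisticRegression(max_iter=1000)),
    ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```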
Work Timing: 5 Days A Week
• Ensure the right stakeholders get the right information at the right time
• Requirement gathering with stakeholders to understand their data requirements
• Creating and deploying reports
• Participate actively in datamarts design discussions
• Work on both RDBMS as well as Big Data for designing BI Solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant, following appropriate design patterns
• Design and plan BI solutions to automate regular reporting
• Debugging, monitoring and troubleshooting BI solutions
• Creating and deploying datamarts
• Writing relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure the integrity of data flowing from heterogeneous data sources into BI solutions.
Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from Top Colleges
• 1-5 years of experience in data warehousing and SQL
• Excellent Analytical Knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, Apache Drill
• Knowledge of fundamentals of Business Intelligence
• In-depth knowledge of RDBMS, data warehousing and data marts
• Smart, motivated and team-oriented
• Sound knowledge of software development and programming (preferably Java)
• Knowledge of the software development lifecycle (SDLC) and models
PriceLabs (chicagobusiness.com/innovators/what-if-you-could-adjust-prices-meet-demand) is a cloud-based software platform for vacation and short-term rentals that helps them dynamically manage prices just the way large hotels and airlines do! Our mission is to help small businesses in the travel and tourism industry by giving them access to advanced analytical systems that are often restricted to large companies.
We're looking for someone with strong analytical capabilities who wants to understand how our current architecture and algorithms work, and help us design and develop long-lasting solutions to improve them. Depending on the needs of the day, the role will come with a good mix of team-work, following our best practices, introducing us to industry best practices, independent thinking, and ownership of your work.
- Design, develop, and enhance our pricing algorithms to enable new capabilities (an illustrative sketch follows the requirements list below).
- Process, analyze, model, and visualize findings from our market level supply and demand data.
- Build and enhance internal and customer facing dashboards to better track metrics and trends that help customers use PriceLabs in a better way.
- Take ownership of product ideas and design discussions.
- Occasional travel to conferences to interact with prospective users and partners, and learn where the industry is headed.
- Bachelors, Masters or Ph.D. in Operations Research, Industrial Engineering, Statistics, Computer Science or other quantitative/engineering fields.
- Strong understanding of analysis of algorithms, data structures and statistics.
- Solid programming experience, including being able to quickly prototype an idea and test it out.
- Strong communication skills, including the ability and willingness to explain complicated algorithms and concepts in simple terms.
- Experience with relational databases and strong knowledge of SQL.
- Experience building data-heavy analytical models in the travel industry.
- Experience in the vacation rental industry.
- Experience developing dynamic pricing models.
- Prior experience working in a fast-paced environment.
- Willingness to wear many hats.
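Purely as an illustration of the dynamic-pricing flavour of the work (the demand model and every number below are invented for the sketch, not PriceLabs' actual algorithm): a tiny grid search for the revenue-maximising nightly rate under an assumed linear demand curve.

```python
# Hypothetical linear demand model: expected occupancy falls as the nightly rate rises.
BASE_DEMAND = 0.9      # occupancy probability at the reference price
REF_PRICE = 100.0      # reference nightly rate
SENSITIVITY = 0.004    # drop in occupancy probability per currency unit above the reference

def expected_revenue(price: float) -> float:
    occupancy = max(0.0, min(1.0, BASE_DEMAND - SENSITIVITY * (price - REF_PRICE)))
    return price * occupancy

# Grid search over candidate nightly rates.
candidates = range(60, 301, 5)
best_price = max(candidates, key=expected_revenue)
print(best_price, round(expected_revenue(best_price), 2))
```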
- Data pre-processing, data transformation, data analysis, and feature engineering
- Performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.); see the sketch below
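A small, generic sketch of the pre-processing and feature-engineering step above (column names are hypothetical), using the vectorised style that Pandas performance optimisation usually comes down to:

```python
import numpy as np
import pandas as pd

def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
    """Clean raw transaction rows and derive simple model-ready features (vectorised)."""
    df = df.copy()
    df["event_ts"] = pd.to_datetime(df["event_ts"], errors="coerce")
    df = df.dropna(subset=["event_ts", "customer_id"])

    # Vectorised derivations are preferred over row-wise .apply for performance.
    df["event_hour"] = df["event_ts"].dt.hour
    df["is_weekend"] = df["event_ts"].dt.dayofweek >= 5
    df["amount_log"] = np.log1p(df["amount"].clip(lower=0))
    return df

# Example with a couple of hypothetical rows.
raw = pd.DataFrame({
    "customer_id": [1, 2],
    "event_ts": ["2024-03-02 14:10:00", "2024-03-04 09:30:00"],
    "amount": [120.0, 35.5],
})
print(engineer_features(raw))
```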
- Required skills:
- Bachelor's in Computer Science, Data Science, Computer Engineering, IT or equivalent
- Fluency in Python (Pandas), PySpark, SQL, or similar
- Azure Data Factory experience (minimum 12 months)
- Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process.
- Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
- Ability to work independently with demonstrated experience in project or program management
- Azure experience; ability to translate data scientists' Python code and make it efficient (production-ready) for cloud deployment