Who is IDfy?
IDfy is the Fintech ScaleUp of the Year 2021. We build technology products that identify people accurately. This helps businesses prevent fraud and engage with genuine customers with the least amount of friction. If you have opened an account with HDFC Bank, ordered from Amazon or Zomato, transacted through Paytm or BharatPe, or played on Dream11 or MPL, you might have already experienced IDfy. Without even knowing it. Well…that’s just how we roll.
Global credit rating giant TransUnion is an investor in IDfy. So are international venture capitalists like MegaDelta Capital, BEENEXT, and Dream Incubator. Blume Ventures is an early investor and continues to place its faith in us.
We have kept our 500 clients safe from fraud while helping the honest get the opportunities they deserve. Our 350-people strong family works and plays out of our offices in suburban Mumbai. IDfy has run verifications on 100 million people. In the next 2 years, we want to touch a billion users. If you wish to be part of this journey filled with lots of action and learning, we welcome you to be part of the team!
What are we looking for?
As a senior software engineer in the Data Fabric POD, you would be responsible for producing and implementing functional software solutions. You will work with upper management to define software requirements and take the lead on operational and technical projects. You would be working with a data management and science platform that provides Data-as-a-Service (DaaS) and Insights-as-a-Service (IaaS) to internal employees and external stakeholders.
You are an eager learner who is technology-agnostic and loves working with data and drawing insights from it. You have excellent organization and problem-solving skills and are looking to build the tools of the future. You have exceptional communication and leadership skills and the ability to make quick decisions.
YOE: 3 - 10 yrs
Position: Sr. Software Engineer/Module Lead/Technical Lead
Responsibilities:
- Breaking down work and orchestrating the development of components for each sprint.
- Identifying risks and forming contingency plans to mitigate them.
- Liaising with team members, management, and clients to ensure projects are completed to standard.
- Inventing new approaches to detecting existing fraud. You will also stay ahead of the game by predicting future fraud techniques and building solutions to prevent them.
- Developing Zero Defect Software that is secured, instrumented, and resilient.
- Creating design artifacts before implementation.
- Developing Test Cases before or in parallel with implementation.
- Ensuring the software developed passes static code analysis, performance, and load tests.
- Developing various kinds of components (such as UI components, APIs, business components, image processing, etc.) that define the IDfy platforms, which drive cutting-edge fraud detection and analytics.
- Developing software using Agile Methodology and tools that support the same.
Requirements:
- Experience with Apache Beam, ClickHouse, Grafana, InfluxDB, Elixir, BigQuery, and Logstash.
- An understanding of Product Development Methodologies.
- Strong understanding of relational databases, especially SQL, and hands-on experience with OLAP.
- Experience in the creation of data ingestion and ETL pipelines (Apache Beam or Apache Airflow experience is good to have).
- Strong design skills in defining API Data Contracts / OOAD / Microservices / Data Models.
Good to have:
- Experience with TimeSeries DBs (we use InfluxDB) and Alerting / Anomaly Detection Frameworks.
- Visualization layers: Metabase, Power BI, Tableau.
- Experience in developing software in the Cloud such as GCP / AWS.
- A passion to explore new technologies and express yourself through technical blogs.
Similar jobs
Data Analyst
at a company building a cutting-edge data science department to serve the older adult community and marketplace.
We are currently seeking a talented and highly motivated Data Analyst to lead the development of our discovery and support platform. The successful candidate will join a small, global team of data-focused associates that has successfully built and maintained a best-in-class traditional, Kimball-based, SQL Server-founded data warehouse and Qlik Sense-based BI dashboards. The successful candidate will take the lead in managing our master data set and developing reports and analytics dashboards.
To do well in this role you need a very fine eye for detail, experience as a data analyst, and a deep understanding of popular data analysis tools and databases.
Specific responsibilities:
- Manage master data, including creation, updates, and deletion.
- Manage users and user roles.
- Provide quality assurance of imported data, working with quality assurance analysts where necessary.
- Commission and decommission data sets.
- Process confidential data and information according to applicable compliance requirements.
- Help develop reports and analyses.
- Manage and design the reporting environment, including data sources, security, and metadata.
- Support the data warehouse in identifying and revising reporting requirements.
- Support initiatives for data integrity and normalization.
- Assess, test, and implement new or upgraded software, and assist with strategic decisions on new systems.
- Generate reports from single or multiple systems.
- Troubleshoot the reporting database environment and reports.
- Evaluate changes and updates to source production systems.
- Train end-users on new reports and dashboards.
- Provide technical expertise in data storage structures, data mining, and data cleansing.
Job Requirements:
- Master’s Degree (or equivalent experience) in computer science, data science or a scientific field that has relevance to healthcare in the United States.
- Work experience as a data analyst or in a related field for more than 5 years.
- Proficiency in statistics, data analysis, data visualization and research methods.
- Strong SQL and Excel skills with ability to learn other analytic tools.
- Experience with BI dashboard tools like Qlik Sense, Tableau, Power BI.
- Experience with AWS services like EC2, S3, Athena and QuickSight.
- Ability to work with stakeholders to assess potential risks.
- Ability to analyze existing tools and databases and provide software solution recommendations.
- Ability to translate business requirements into non-technical, lay terms.
- High-level experience in methodologies and processes for managing large-scale databases.
- Demonstrated experience in handling large data sets and relational databases.
- Understanding of addressing and metadata standards.
Required skills and experience:
- Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
- Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
- Experience building monitoring/alerting frameworks with tools like New Relic, with escalations via Slack/email/dashboard integrations, etc.
- Executive-level communication, prioritization, and team leadership skills
Responsibilities:
- Design, construct, install, test and maintain data pipeline and data management systems.
- Ensure that all systems meet the business/company requirements as well as industry practices.
- Integrate up-and-coming data management and software engineering technologies into existing data structures.
- Build processes for data mining, data modeling, and data production.
- Create custom software components and analytics applications.
- Collaborate with members of your team (eg, Data Architects, the Software team, Data Scientists) on the project's goals.
- Recommend different ways to constantly improve data reliability and quality.
Requirements:
- Experience in a related field, with real-world skills and testimonials from former employers.
- Familiarity with data warehouses like Redshift, BigQuery, and Athena.
- Familiarity with data processing systems like Flink, Spark, and Storm.
- Proficiency in Python and SQL, with demonstrable work experience and proof of technical expertise.
- You may also consider a Master's degree in computer engineering or science in order to fine-tune your skills while on the job. (Although a Master's isn't required, it is always appreciated).
- Intellectual curiosity to find new and unusual ways of how to solve data management issues.
- Ability to approach data organization challenges while keeping an eye on what's important.
- A basic level of data science knowledge is a must; you should understand a bit of analytics.
About Us
upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, etc. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and stay relevant and help build the careers of tomorrow.
- upGrad was awarded the Best Tech for Education by IAMAI for 2018-19
- upGrad was also ranked as one of the LinkedIn Top Startups 2018: the 25 most sought-after startups in India
- upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany
- We were also covered by the Financial Times along with other disruptors in Ed-Tech
- upGrad is the official education partner for the Government of India's Startup India program
- Our program with IIIT B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning
Role Summary
Are you excited by the challenge and the opportunity of applying data-science and data-analytics techniques to the fast-developing education technology domain? Do you look forward to the sense of ownership and achievement that comes with innovating and creating data products from scratch and pushing them live into production systems? Do you want to work with a team of highly motivated members who are on a mission to empower individuals through education?
If this is you, come join us and become a part of the upGrad technology team. At upGrad the technology team enables all the facets of the business - whether it's bringing efficiency to our marketing and sales initiatives, enhancing our student learning experience, empowering our content, delivery, and student success teams, or aiding our students toward their desired career outcomes. We play the part of bringing together data & tech to solve these business problems and opportunities at hand.
We are looking for a highly skilled, experienced, and passionate data scientist who can come on board and help create the next generation of data-powered education tech products. The ideal candidate is someone who has worked in a data science role before, is comfortable working with unknowns, evaluating the data and the feasibility of applying scientific techniques to business problems and products, and has a track record of developing and deploying data-science models into live applications. Someone with a strong math, stats, and data-science background, comfortable handling data (structured + unstructured), as well as strong engineering know-how to implement and support such data products in a production environment.
Ours is a highly iterative and fast-paced environment, hence being flexible, communicating well and attention-to-detail are very important too. The ideal candidate should be passionate about the customer impact and comfortable working with multiple stakeholders across the company.
Roles & Responsibilities:
- 3+ years of experience in analytics, data science, machine learning or comparable role
- Bachelor's degree in Computer Science, Data Science/Data Analytics, Math/Statistics or related discipline
- Experience in building and deploying Machine Learning models in Production systems
- Strong analytical skills: ability to make sense out of a variety of data and its relation/applicability to the business problem or opportunity at hand
- Strong programming skills: comfortable with Python - pandas, numpy, scipy, matplotlib; Databases - SQL and noSQL
- Strong communication skills: ability to both formulate/understand the business problem at hand as well as ability to discuss with non data-science background stakeholders
- Comfortable dealing with ambiguity and competing objectives
Skills Required
- Experience in Text Analytics, Natural Language Processing
- Advanced degree in Data Science/Data Analytics or Math/Statistics
- Comfortable with data-visualization tools and techniques
- Knowledge of AWS and Data Warehousing
- Passion for building data products for production systems - a strong desire to impact the product through data-science techniques
- Informatica Big Data Management
Work days: Sun-Thu, day shift
Business Intelligence Lead
at Kaleidofin
We are looking for someone to make an impact by enabling innovation and growth; someone with a passion for what they do and a vision for the future.
Responsibilities:
- Be the analytical expert in Kaleidofin, managing ambiguous problems by using data to execute sophisticated quantitative modeling and deliver actionable insights.
- Develop comprehensive skills including project management, business judgment, analytical problem solving and technical depth.
- Become an expert on data and trends, both internal and external to Kaleidofin.
- Communicate key state of the business metrics and develop dashboards to enable teams to understand business metrics independently.
- Collaborate with stakeholders across teams to drive data analysis for key business questions, communicate insights and drive the planning process with company executives.
- Automate scheduling and distribution of reports and support auditing and value realization.
- Partner with enterprise architects to define Business Intelligence solutions and ensure they adhere to an enterprise reference architecture.
- Design robust data-centric solutions and architecture that incorporate technology and strong BI practices to scale up and eliminate repetitive tasks.
Requirements:
- Experience leading development efforts through all phases of SDLC.
- 5+ years "hands-on" experience designing Analytics and Business Intelligence solutions.
- Experience with QuickSight, Power BI, Tableau, and Qlik is a plus.
- Hands on experience in SQL, data management, and scripting (preferably Python).
- Strong data visualisation design skills, data modeling and inference skills.
- Hands-on experience in managing small teams.
- Financial services experience preferred, but not mandatory.
- Strong knowledge of architectural principles, tools, frameworks, and best practices.
- Excellent communication and presentation skills to communicate and collaborate with all levels of the organisation.
- Team-handling experience preferred for candidates with 5+ years of experience.
- Notice period less than 30 days.
Senior Data Engineer
at Bookr Inc
In this role you'll get to:
- Be a core team member for the data platform, setting up the platform foundation while adhering to all required quality standards and design patterns
- Write efficient and quality code that can scale
- Adopt Bookr quality standards, recommend process standards and best practices
- Research, learn & adapt new technologies to solve problems & improve existing solutions
- Contribute to engineering excellence backlog
- Identify performance issues
- Conduct effective code and design reviews
- Improve reliability of overall production system by proactively identifying patterns of failure
- Lead and mentor junior engineers by example
- Take end-to-end ownership of stories (including design, serviceability, performance, and failure handling)
- Strive hard to provide the best experience to anyone using our products
- Conceptualise innovative and elegant solutions to solve challenging big data problems
- Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome products
- Adhere to company policies, procedures, mission, values, and standards of ethics and integrity
On day one we'll expect you to have:
- A B.E/B.Tech from a reputed institution
- A minimum of 5 years of software development experience, with at least a year of experience leading/guiding people
- Expert coding skills in Python/PySpark or Java/Scala
- Deep understanding in Big Data Ecosystem - Hadoop and Spark
- Must have project experience with Spark
- Ability to independently troubleshoot Spark jobs
- Good understanding of distributed systems
- The ability to learn fast and quickly adapt to new technologies
- High ownership and commitment
- Expert hands on experience with RDBMS
- Ability to work independently as well as working collaboratively in a team
Added bonuses:
- Hands-on experience with EMR/Glue/Databricks
- Hands-on experience with Airflow
- Hands-on experience with the AWS Big Data ecosystem
We are looking for passionate engineers who are always hungry for challenging problems. We believe in creating an opportunity-rich, yet balanced, work environment for savvy, entrepreneurial tech individuals. We are thriving on remote work, with a team working across multiple timezones.
- Flexible hours & Remote work - We are a results focused bunch, so we encourage you to work whenever and wherever you feel most creative and focused.
- Unlimited PTO - We want you to feel free to recharge your batteries when you need it!
- Stock Options - Opportunity to participate in Company stock plan
- Flat hierarchy - Team leaders at your fingertips
- BFC (stands for bureaucracy-free company). We're action-oriented and don't bother with dragged-out meetings or pointless admin exercises - we'd rather get our hands dirty!
- Working alongside leaders - Being part of the core team will give you the opportunity to work directly with the founding and management team
Data Scientist
Ganit Inc. is the fastest growing Data Science & AI company in Chennai.
Founded in 2017 by 3 industry experts who are alumni of the IITs/SPJIMR, each with 17+ years of experience in the field of analytics.
We are in the business of maximising Decision Making Power (DMP) for companies by providing solutions at the intersection of hypothesis-based analytics, discovery-based AI, and IoT. Our solutions are a combination of customised services and a functional product suite.
We primarily operate as a US-based start-up and have clients across the US, Asia-Pacific, and the Middle East, with offices in the USA (New Jersey) and India (Chennai).
Having started with 3 people, the company is growing fast and now has 100+ employees.
1. What do we expect from you?
- Should possess a minimum of 2 years of experience in data analytics model development and deployment
- Skills relating to core Statistics & Mathematics.
- Huge interest in handling numbers
- Ability to understand all domains in businesses across various sectors
- Natural passion towards numbers, business, coding, visualisation
2. Necessary skill set:
- Proficient in R/Python, Advanced Excel, SQL
- Should have worked with Retail/FMCG/CPG projects solving analytical problems in Sales/Marketing/Supply Chain functions
- Very good understanding of algorithms, mathematical models, statistical techniques, and data mining methods such as regression models, clustering/segmentation, time-series forecasting, decision trees/random forests, etc.
- Ability to choose the right model for the right data and translate that into code in R, Python, VBA (Proven capabilities)
- Should have handled large datasets, with a thorough understanding of SQL
- Ability to handle a team of Data Analysts
3. Good to have skill set:
- Microsoft PowerBI / Tableau / Qlik View / Spotfire
4. Job Responsibilities:
- Translate business requirements into technical requirements
- Data extraction, preparation and transformation
- Identify, develop and implement statistical techniques and algorithms that address business challenges and adds value to the organisation
- Create and implement data models
- Interact with clients for queries and delivery adoption
5. Screening Methodology
- Problem Solving round (Telephonic Conversation)
- Technical discussion round (Telephonic Conversation)
- Final fitment discussion (Video Round)
Job Description
Problem Formulation: Identifies possible options to address business problems; must possess a good understanding of dimensional modelling.
Must have worked on at least one end-to-end project using any cloud data warehouse (Azure Synapse, AWS Redshift, Google BigQuery).
Good to have an understanding of Power BI and its integration with cloud services like Azure or GCP.
Experience working with SQL Server and SSIS (preferred).
Applied Business Acumen: Supports the development of business cases and recommendations. Owns delivery of project activity and tasks assigned by others. Supports process updates and changes. Solves business issues.
Data Transformation/Integration/Optimization:
The ETL developer is responsible for designing and creating the data warehouse and all related data extraction, transformation, and loading functions in the company.
The developer should provide oversight and planning of data models, database structural design, and deployment, and work closely with the data architect and business analyst.
Duties include working in cross-functional software development teams (business analysts, testers, developers) following agile ceremonies and development practices.
The developer plays a key role in contributing to the design, evaluation, selection, implementation, and support of database solutions.
Development and Testing: Develops codes for the required solution by determining the appropriate approach and leveraging business, technical, and data requirements.
Creates test cases to review and validate the proposed solution design. Works on POCs and deploys the software to production servers.
Good to Have (Preferred Skills):
- Minimum 4-8 years of experience in data warehouse design and development for large-scale applications
- Minimum 3 years of experience with star schemas, dimensional modelling, and extract-transform-load (ETL) design and development
- Expertise working with various databases (SQL Server, Oracle)
- Experience developing packages, procedures, views, and triggers
- Exposure to Big Data technologies is nice to have
- Good written and oral communication skills
- SSIS experience is nice to have
Education and Experience
- Minimum 4-8 years of software development experience
- Bachelor's and/or Master’s degree in computer science
Please reply with the details below:
Total Experience:
Relevant Experience:
Current CTC:
Expected CTC:
Any offers: Y/N
Notice Period:
Qualification:
DOB:
Present Company Name:
Designation:
Domain:
Reason for job change:
Current Location: