About Busigence Technologies
Busigence is a Decision Intelligence Company. We create decision intelligence products for real people by combining data, technology, business, and behavior to enable stronger decisions.
We are a scaling, established startup by IIT alumni, innovating in and disrupting the marketing domain through artificial intelligence. We bring on board people who are dedicated to delivering wisdom to humanity by solving the world's most pressing problems differently, thereby significantly impacting thousands of souls every day.
We are a deep-rooted organization with a six-year success story, having worked with folks from top-tier backgrounds (IIT, NSIT, DCE, BITS, IIITs, NITs, IIMs, ISI, etc.) while maintaining an awesome culture with a common vision to build great data products.
In the past we have served fifty-five customers, and we are presently developing our second product, Robonate. Our first was emmoQ, an emotion intelligence platform. Our third offering, H2HData, is an innovation lab where we solve hard problems through data, science, & design.
We work extensively & intensely on big data, data science, machine learning, deep learning, reinforcement learning, data analytics, natural language processing, cognitive computing, and business intelligence.
We try really hard to hire fun-loving, crazy folks who are driven by more than a paycheck. You will be working with the creamiest talent on extremely challenging problems at a most happening workplace.
Our mission is to make the world decision intelligent. We envision having worked on at least 1% of the world's data by 2020.
Why Explore a Career at Busigence
This section should have been entitled - Why Explore a Challenge at Busigence. What?
Busigence is not for everyone. This is the strongest differentiator. How?
Skills are secondary for us. We believe in intentions, not capabilities. Why?
------------------------------------------------------------------------------------------------------------------------
If the above three are not understood and/or don't interest you, we would encourage you to refrain from applying. You won't actually be able to work with us.
80-85% of candidates look for a job in an open position. 85-98% foresee a career in it. Hardly 2% are able to see it as a challenge which may satisfy their soul.
We look for these 2%. PERIOD.
If you are fortunate enough to fall into this group, then the world is too small for you, the reason being that there are very few organisations which can really meet your expectations.
We can! Busigence works on really hard problems. Solving customers' problems is our passion.
1. We do what the world is moving towards: Artificial Intelligence. Real AI.
2. We are a real startup culture (this is not bean bags or open office or flexible hours. It is a spirit to create something which doesn't exist).
3. We hire the crème de la crème. Coincidentally, they happen to be candidates from the topmost tier (IIT & equivalent). People enjoy working with like-minded people.
4. You will be empowered every day, bringing out the entrepreneurial trait in you.
5. This will be the Greatest Work of Your Life. Promise!
Busigence Interview Process
We don't hire, we handpick - Believe with us. Laugh with us. Work with us.
For formality,
Step 1 : Apply iff you meet the Real Role & Ideal You criteria
Step 2 : Round 0: First call followed by Application Form
Step 3 : Round 1: Technology/ Process/ Business Capability Evaluation
Step 4 : Round 2: Day Spent (1 or 2 days) working with us
Enough. We are Done.
Similar jobs
Sr. Data Scientist (Global Media Agency)
at Global Media Agency - A client of Merito
Our client combines Adtech and Martech platform strategy with data science & data engineering expertise, helping our clients make advertising work better for people.
- Act as the primary day-to-day contact on analytics for agency-client leads
- Develop bespoke analytics proposals for presentation to agencies & clients, for delivery within the teams
- Ensure delivery of projects and services across the analytics team meets our stakeholder requirements (time, quality, cost)
- Work hands-on with platforms to perform data pre-processing, involving data transformation as well as data cleaning (a sketch of this flow follows this list)
- Ensure data quality and integrity
- Interpret and analyse data problems
- Build analytic systems and predictive models
- Increase the performance and accuracy of machine learning algorithms through fine-tuning and further optimization
- Visualize data and create reports
- Experiment with new models and techniques
- Align data projects with organizational goals
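For illustration, here is a minimal sketch of the pre-processing and predictive-modelling flow described above, using pandas and scikit-learn. The input file and column names (campaigns.csv, spend, clicks, impressions, converted) are hypothetical placeholders, not part of the role description.

    # A minimal sketch, assuming a hypothetical campaigns.csv with spend,
    # clicks, impressions, and converted columns (placeholders only).
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("campaigns.csv")                           # hypothetical input
    df = df.dropna(subset=["spend", "clicks", "impressions"])   # data cleaning
    df["ctr"] = df["clicks"] / df["impressions"]                # data transformation

    X_train, X_test, y_train, y_test = train_test_split(
        df[["spend", "ctr"]], df["converted"], test_size=0.2, random_state=42)
    model = LogisticRegression().fit(X_train, y_train)          # predictive model
    print("holdout accuracy:", model.score(X_test, y_test))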
Requirements
- Minimum 6-7 years' experience working in Data Science
- Prior experience as a Data Scientist within digital media is desirable
- Solid understanding of machine learning
- A degree in a quantitative field (e.g. economics, computer science, mathematics, statistics, engineering, physics, etc.)
- Experience with SQL / BigQuery / the GMP tech stack / clean rooms such as ADH
- A knack for statistical analysis and predictive modelling
- Good knowledge of R, Python
- Experience with SQL, MYSQL, PostgreSQL databases
- Knowledge of data management and visualization techniques
- Hands-on experience on BI/Visual Analytics Tools like PowerBI or Tableau or Data Studio
- Evidence of technical comfort and good understanding of internet functionality desirable
- Analytical pedigree - evidence of having approached problems from a mathematical perspective and working through to a solution in a logical way
- Proactive and results-oriented
- A positive, can-do attitude with a thirst to continually learn new things
- An ability to work independently and collaboratively with a wide range of teams
- Excellent communication skills, both written and oral
Job brief
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Requirements
1. 2 to 4 years of relevant industry experience
2. Experience in linear algebra, statistics & probability (e.g. distributions), as well as deep learning and machine learning
3. Strong mathematical and statistics background is a must
4. Experience in machine learning frameworks such as TensorFlow, Caffe, PyTorch, or MXNet
5. Strong industry experience in using design patterns, algorithms and data structures
6. Industry experience in using feature engineering, model performance tuning, and optimizing machine learning models
7. Hands-on development experience in Python and packages such as NumPy, scikit-learn and Matplotlib
8. Experience in model building and hyperparameter tuning (a short sketch follows this list)
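As a hedged illustration of point 8, here is a minimal hyperparameter-tuning sketch using scikit-learn's GridSearchCV on synthetic data; the model and parameter grid are arbitrary choices, not a prescribed setup.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    # Synthetic data stands in for a real dataset.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    grid = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 200], "max_depth": [4, 8, None]},
        cv=5,
        scoring="accuracy",
    )
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)   # tuned settings and CV score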
We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.
We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.
Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!
The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.
Key Responsibilities
- Build data and analytical engineering pipelines with standard ELT patterns, implement data compaction pipelines and data modelling, and oversee overall data quality
- Work with the Office of the CTO as an active member of our architecture guild
- Write pipelines to consume data from multiple sources
- Write a data transformation layer using DBT to transform millions of records into the data warehouse (a sketch of such a pipeline follows this list)
- Implement data warehouse entities with common re-usable data model designs, with automation and data quality capabilities
- Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
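For illustration only, a minimal sketch of such an ELT pipeline: an Airflow DAG (Airflow and DBT are named under What To Bring below) that runs a hypothetical ingestion script and then a DBT transformation. The dag_id, commands, and paths are assumptions, not the team's actual setup.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="elt_daily",                      # hypothetical DAG name
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_load = BashOperator(
            task_id="extract_load",
            bash_command="python ingest.py",     # hypothetical ingestion script
        )
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt",   # assumed project path
        )
        extract_load >> dbt_run                  # EL first, then the T via DBT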
What To Bring
- 5+ years of software development experience; startup experience is a plus
- Past experience working with Airflow and DBT is preferred
- 5+ years of experience working in any backend programming language
- Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)
- Experience formulating ideas, building proofs of concept (POCs), and converting them into production-ready projects
- Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure
- Basic understanding of Kubernetes & Docker is a must
- Experience in data processing (ETL, ELT) and/or cloud-based platforms
- Working proficiency and communication skills in verbal and written English
CustomerGlu is a low code interactive user engagement platform. We're backed by Techstars and top-notch VCs from the US like Better Capital and SmartStart.
As we build repeatability into our core product offering at CustomerGlu, a high-quality data infrastructure and data applications are emerging as a key requirement to further drive ROI from our interactive engagement programs and to generate ideas for new campaigns.
Hence we are adding more team members to our existing data team and looking for a Data Engineer.
Responsibilities
- Design and build a high-performing data platform that is responsible for the extraction, transformation, and loading of data.
- Develop low-latency real-time data analytics and segmentation applications.
- Setup infrastructure for easily building data products on top of the data platform.
- Be responsible for logging, monitoring, and error recovery of data pipelines.
- Build workflows for automated scheduling of data transformation processes.
- Able to lead a team
Requirements
- 3+ years of experience and ability to manage a team
- Experience working with databases like MongoDB and DynamoDB.
- Knowledge of building batch data processing applications using Apache Spark (see the sketch after this list).
- Understanding of how backend services like HTTP APIs and Queues work.
- Write good quality, maintainable code in one or more programming languages like Python, Scala, and Java.
- Working knowledge of version control systems like Git.
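As a rough illustration of the Spark requirement above, a minimal PySpark batch job that aggregates event data; the source and destination paths and column names are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("event-rollup").getOrCreate()

    # Hypothetical source path and schema (timestamp, user_id).
    events = spark.read.json("s3a://example-bucket/events/")
    daily = (
        events
        .withColumn("day", F.to_date("timestamp"))   # derive a day column
        .groupBy("day", "user_id")
        .count()                                     # events per user per day
    )
    daily.write.mode("overwrite").parquet("s3a://example-bucket/rollups/daily/")
    spark.stop()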
Bonus Skills
- Experience in real-time data processing using Apache Kafka or AWS Kinesis (a sketch follows below).
- Experience with AWS tools like Lambda and Glue.
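And a minimal real-time sketch using the kafka-python client; the topic name, broker address, and message schema are assumptions, not CustomerGlu's actual setup.

    import json

    from kafka import KafkaConsumer   # kafka-python package

    consumer = KafkaConsumer(
        "user-events",                              # hypothetical topic
        bootstrap_servers="localhost:9092",         # assumed broker address
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        event = message.value
        print(event.get("event_type"))              # e.g. hand off to segmentation logic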
Data Scientist
at Symansys Technologies India Pvt Ltd
Specialism: Advanced Analytics, Data Science, regression, forecasting, analytics, SQL, R, Python, decision trees, random forests, SAS, clustering, classification
Senior Analytics Consultant- Responsibilities
- Understand the business problem and requirements by building domain knowledge, and translate them into a data science problem
- Conceptualize and design a cutting-edge data science solution to solve that problem, applying design thinking concepts
- Identify the right algorithms, tech stack, and sample outputs required to efficiently address the end need
- Prototype and experiment with the solution to successfully demonstrate its value
- Independently, or with support from the team, execute the conceptualized solution as per plan, following project management guidelines
- Present the results to internal and client stakeholders in an easy-to-understand manner with great storytelling, storyboarding, insights and visualization
- Help build overall data science capability for eClerx through support in pilots, pre-sales pitches, product development, and practice development initiatives
What are we looking for:
- Strong experience in MySQL and writing advanced queries
- Strong experience in Bash and Python
- Familiarity with ElasticSearch, Redis, Java, NodeJS, ClickHouse, S3
- Exposure to cloud services such as AWS, Azure, or GCP
- 2+ years of experience in production support
- Strong experience with log management and performance monitoring tools like ELK, Prometheus + Grafana, and logging services on various cloud platforms
- Strong understanding of Linux OSes like Ubuntu and CentOS/Red Hat Linux
- Interest in learning new languages/frameworks as needed
- Good written and oral communications skills
- A growth mindset and passionate about building things from the ground up, and most importantly, you should be fun to work with
As a product solutions engineer, you will:
- Analyze recorded runtime issues, diagnose and do occasional code fixes of low to medium complexity
- Work with developers to find and correct more complex issues
- Address urgent issues quickly, work within and measure against customer SLAs
- Automate manual/repetitive activities using shell and Python scripts
- Build anomaly detectors wherever applicable (a simple baseline sketch follows this list)
- Pass articulated feedback from customers to the development and product team
- Maintain an ongoing record of problem analysis and resolution in an on-call monitoring system
- Offer technical support needed in development
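As a sketch of the anomaly-detector bullet above, here is a simple trailing-window z-score detector in Python; real detectors would be tuned to the metric in question, and this baseline is only illustrative.

    import numpy as np

    def zscore_anomalies(values, window=30, threshold=3.0):
        # Flag points deviating more than `threshold` standard deviations
        # from the mean of the trailing `window` observations.
        values = np.asarray(values, dtype=float)
        flagged = []
        for i in range(window, len(values)):
            history = values[i - window:i]
            mu, sigma = history.mean(), history.std()
            if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
                flagged.append(i)
        return flagged

    # Synthetic latencies with one injected spike; index 100 gets flagged.
    latencies = list(np.random.normal(200, 10, 100)) + [900]
    print(zscore_anomalies(latencies))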
Should design and operate data pipelines.
Build and manage an analytics platform using Elasticsearch, Redshift, and MongoDB.
Strong programming fundamentals in data structures and algorithms.
We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP-driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, and information extraction from clinical notes.
This is a role for highly technical machine learning & data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionalize them using a large range of tools, algorithms, and languages. Most importantly, they need the ability to autonomously plan and organize their work assignments based on high-level team goals.
You will be responsible for setting an agenda to develop and ship machine learning models that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company, and help build a foundation of tools and practices used by quantitative staff across the company.
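For a concrete flavour of the NER part of this work, a minimal spaCy sketch; the general-purpose en_core_web_sm model stands in here for a clinical model (e.g. a scispaCy variant), so this is illustrative rather than Episource's actual pipeline.

    import spacy

    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    note = "Patient presents with type 2 diabetes and was prescribed metformin."
    doc = nlp(note)
    for ent in doc.ents:
        print(ent.text, ent.label_)   # entities a clinical model would map to codes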
What you will achieve:
- Define the research vision for data science, and oversee planning, staffing, and prioritization to make sure the team is advancing that roadmap
- Invest in your team’s skills, tools, and processes to improve their velocity, including working with engineering counterparts to shape the roadmap for machine learning needs
- Hire, retain, and develop talented and diverse staff through ownership of our data science hiring processes, brand, and functional leadership of data scientists
- Evangelise machine learning and AI internally and externally, including attending conferences and being a thought leader in the space
- Partner with the executive team and other business leaders to deliver cross-functional research work and models
Required Skills:
- Strong background in classical machine learning and machine learning deployments is a must, preferably with 4-8 years of experience
- Knowledge of deep learning & NLP
- Hands-on experience in TensorFlow/PyTorch, Scikit-Learn, Python, Apache Spark & big data platforms to manipulate large-scale structured and unstructured datasets
- Experience with GPU computing is a plus
- Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization. This could be through technical leadership with ownership over a research agenda, or developing a team as a personnel manager in a new area at a larger company.
- Expert-level experience with a wide range of quantitative methods that can be applied to business problems
- Evidence you’ve successfully been able to scope, deliver and sell your own research in a way that shifts the agenda of a large organization
- Excellent written and verbal communication skills on quantitative topics for a variety of audiences: product managers, designers, engineers, and business leaders
- Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling
Qualifications
- Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization
- Expert-level experience with machine learning that can be applied to business problems
- Evidence you’ve successfully been able to scope, deliver and sell your own work in a way that shifts the agenda of a large organization
- Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling
- Degree in a field that has very applicable use of data science / statistics techniques (e.g. statistics, applied math, computer science, or a science field with direct statistics application)
- 5+ years of industry experience in data science and machine learning, preferably at a software product company
- 3+ years of experience managing data science teams, incl. managing/grooming managers beneath you
- 3+ years of experience partnering with executive staff on data topics
Data Engineer
Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex and mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.
Responsibilities and Duties :
- As a Data Engineer you will be responsible for the development of data pipelines for numerous applications handling all kinds of data: structured, semi-structured & unstructured. Big data knowledge, especially in Spark & Hive, is highly preferred.
- Work in a team and provide proactive technical oversight, advising development teams to foster re-use, design for scale, stability, and operational efficiency of data/analytical solutions
Education level :
- Bachelor's degree in Computer Science or equivalent
Experience :
- Minimum 5+ years of relevant experience working on production-grade projects, with hands-on, end-to-end software development experience
- Expertise in application, data and infrastructure architecture disciplines
- Expertise in designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design and business processes
Proficiency in :
- Modern programming languages like Java, Python, Scala
- Big Data technologies: Hadoop, Spark, Hive, Kafka (a Spark-Hive sketch follows this section)
- Writing decently optimized SQL queries
- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (Optional)
- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions
- Knowledge of system development lifecycle methodologies, such as waterfall and AGILE.
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.
- Experience generating physical data models and the associated DDL from logical data models.
- Experience developing data models for operational, transactional, and operational reporting, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.
- Experience enforcing data modeling standards and procedures.
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
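As a hedged sketch of the Spark-Hive integration referenced above, a minimal PySpark job that reads raw data, cleans it with SQL, and writes it back as a Hive table; the paths, database, and table names are hypothetical.

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-integration")
        .enableHiveSupport()                      # lets Spark read/write Hive tables
        .getOrCreate()
    )

    raw = spark.read.parquet("hdfs:///data/raw/orders/")   # assumed HDFS path
    raw.createOrReplaceTempView("orders_raw")
    cleaned = spark.sql(
        "SELECT order_id, amount FROM orders_raw WHERE amount > 0"
    )
    cleaned.write.mode("overwrite").saveAsTable("analytics.orders_clean")  # assumed DB/table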
Skills :
Must Know :
- Core big-data concepts
- Spark - PySpark/Scala
- A data integration tool like Pentaho, NiFi, SSIS, etc. (at least one)
- Handling of various file formats
- Cloud platform - AWS/Azure/GCP
- Orchestration tool - Airflow