AGM Data Engineering
ACT FIBERNET
Posted by Sumit Sindhwani
9 - 14 yrs
₹20L - ₹36L / yr
Bengaluru (Bangalore)
Skills
Data engineering
Data Engineer
Hadoop
Informatica
QlikView
Data pipeline

Key Responsibilities:

  • Development of proprietary processes and procedures designed to process various data streams around critical databases in the organization
  • Management of technical resources around data technologies, including relational databases, NoSQL databases, business intelligence databases, scripting languages, big data tools and technologies, and visualization tools
  • Creation of a project plan, including timelines and critical milestones, in support of the project
  • Identification of the vital skill sets/staff required to complete the project
  • Identification of crucial sources of the data needed to achieve the objective

 

Skill Requirements:

  • Experience with data pipeline processes and tools
  • Well versed in the data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, ETL, ESB)
  • Experience with an established ETL tool, e.g. Informatica or Ab Initio
  • Deep understanding of big data systems such as Hadoop, Spark, YARN, Hive, Ranger, and Ambari
  • Deep knowledge of the Qlik ecosystem, including QlikView, Qlik Sense, and NPrinting
  • Proficiency in Python or a similar programming language
  • Exposure to data science and machine learning
  • Comfort working in a fast-paced environment

Soft attributes:

  • Independence: Must be able to work without constant direction or supervision, be self-motivated, and possess a strong work ethic, continually putting forth extra effort
  • Creativity: Must be able to generate imaginative, innovative solutions that meet the needs of the organization. Must be a strategic thinker/solution seller, able to conceive integrated solutions (with field-force apps, customer apps, CCT solutions, etc.) and to approach each unique situation or challenge in different ways using the same tools
  • Resilience: Must remain effective in high-pressure situations, using both positive and negative outcomes as motivation to keep moving toward personal and team commitments and goals

About ACT FIBERNET

Founded: 2008
Size: 100-1000
Stage: Profitable
About

ACT (Atria Convergence Technologies Pvt Ltd.) is one of the country's most renowned triple-play service providers, with close to 1.5 million happy customers. We are on the threshold of being a 1,000-crore company, with a strong team of more than 6,500 employees across markets. Our customers currently experience the following state-of-the-art services under the ACT brand:

  • Fibernet (Internet over Fiber Optics)
  • HD TV
  • Digital TV

Headquartered in Bangalore, ACT is spread across the length and breadth of Karnataka, Andhra Pradesh and Tamil Nadu. Pioneers in Fiber-To-The-Home technology, ACT Fibernet is currently the largest non-telco and the fastest-growing Internet Service Provider in the country. ACT is funded by IVFA (India Value Fund Advisors), a private equity investment fund responsible for building giants like Biocon, Radio City, and HDFC Bank, to name a few. The leadership team of ACT comprises 30+ professionals hailing from diverse industries such as FMCG, Entertainment, Information & Technology, Telecom, and Retail.


Similar jobs

client of Merito
Mumbai
3 - 8 yrs
Best in industry
Python
SQL
Tableau
PowerBI
PHP

Our client is the world's largest media investment company and is a part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Software Engineer to join us. In this role, you will be responsible for coding/implementing custom marketing applications that the Tech COE builds for its customers, and for managing a small team of developers.

 

What your day job looks like:

  • Serve as a Subject Matter Expert on data usage: extraction, manipulation, and inputs for analytics
  • Develop data extraction and manipulation code based on business rules
  • Develop automated and manual test cases for the code written
  • Design and construct data stores and the procedures for their maintenance
  • Perform data extract, transform, and load (ETL) activities from several data sources (a minimal sketch follows this list)
  • Develop and maintain strong relationships with stakeholders
  • Write high-quality code as per the prescribed standards
  • Participate in internal projects as required
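
For illustration, a minimal sketch of the kind of extract-transform-load step described above, assuming the snowflake-connector-python package; the connection details, table names, and business rule are hypothetical placeholders, not details from the posting:

    # Minimal ELT sketch against Snowflake; connection details and table names are hypothetical.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",      # hypothetical account identifier
        user="etl_user",           # hypothetical; use a secrets manager in practice
        password="***",
        warehouse="ANALYTICS_WH",
        database="MARKETING",
    )
    cur = conn.cursor()

    # Transform/load: apply a simple business rule (cost per click) inside the warehouse
    cur.execute("""
        CREATE OR REPLACE TABLE REPORTING.CAMPAIGN_CPC AS
        SELECT campaign_id, spend / NULLIF(clicks, 0) AS cpc
        FROM RAW.CAMPAIGN_STATS
    """)

    # Extract: pull a sample of the derived rows for downstream analytics
    cur.execute("SELECT campaign_id, cpc FROM REPORTING.CAMPAIGN_CPC LIMIT 10")
    for campaign_id, cpc in cur.fetchall():
        print(campaign_id, cpc)

    cur.close()
    conn.close()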

 
Minimum qualifications:

  • B. Tech./MCA or equivalent preferred
  • 3+ years of excellent hands-on experience in Big Data, ETL development, and data processing


What you'll bring:

  • Strong experience working with Snowflake, SQL, and PHP/Python
  • Strong experience in writing complex SQL queries
  • Good communication skills
  • Good experience working with a BI tool such as Tableau or Power BI
  • Sqoop, Spark, EMR, and Hadoop/Hive are good to have

 

 

Picture the future
Agency job
via Jobdost by Sathish Kumar
Hyderabad
4 - 7 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 
  • Ingest data from different sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement the ingestion and processing with Big Data technologies (a minimal sketch follows this list)
  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it using the language supported by the base data platform
  • Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 
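
For illustration, a minimal PySpark sketch of the ingestion, transformation, and data-quality steps described above; the JDBC connection, S3 paths, and column names are hypothetical placeholders:

    # Minimal ingestion/processing sketch in PySpark; all sources and names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

    # Ingest from an RDBMS source via JDBC (hypothetical connection details)
    accounts = (spark.read.format("jdbc")
                .option("url", "jdbc:postgresql://db-host:5432/core")
                .option("dbtable", "public.accounts")
                .option("user", "reader").option("password", "***")
                .load())

    # Ingest a flat-file source from S3 (hypothetical path)
    payments = spark.read.option("header", True).csv("s3a://raw-bucket/payments/*.csv")

    # Transform: join the sources and normalise the amount column
    enriched = (payments.join(accounts, "account_id", "left")
                .withColumn("amount", F.col("amount").cast("double")))

    # Automated data-quality check: fail the batch if key fields are missing
    bad_rows = enriched.filter(F.col("account_id").isNull() | F.col("amount").isNull()).count()
    if bad_rows > 0:
        raise ValueError(f"Data quality check failed: {bad_rows} rows with missing keys/amounts")

    # Publish the processed data for downstream consumers (hypothetical location)
    enriched.write.mode("overwrite").parquet("s3a://curated-bucket/payments_enriched/")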

 

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or related discipline
  • Advanced knowledge of one of the following languages: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.


Mandatory Requirements 

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting languages: Python & PySpark (a minimal sketch follows)
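
For illustration, a minimal PySpark sketch of file-based ingestion from S3 into partitioned Parquet, the pattern these requirements point at; bucket names and columns are hypothetical, and inside AWS Glue the same logic would typically run through a GlueContext:

    # Minimal S3-to-Parquet data-lake sketch in PySpark; buckets and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("s3-parquet-sketch").getOrCreate()

    # Ingest raw CSV drops from the landing bucket
    raw = spark.read.option("header", True).csv("s3a://landing-bucket/orders/2024-01-01/")

    # Light cleanup and a partition column for the lake layout
    clean = (raw.dropDuplicates(["order_id"])
                .withColumn("order_date", F.to_date("order_ts"))
                .filter(F.col("order_id").isNotNull()))

    # Write partitioned Parquet into the curated zone of the data lake
    (clean.write.mode("append")
          .partitionBy("order_date")
          .parquet("s3a://curated-bucket/orders/"))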

 

Chennai
5 - 14 yrs
₹13L - ₹21L / yr
Python
Java
PySpark
Javascript
Hadoop

Python + Data Scientist:

  • Hands-on and sound knowledge of Python, PySpark, and JavaScript
  • Build data-driven models to understand the characteristics of engineering systems
  • Train, tune, validate, and monitor predictive models (a minimal sketch follows this list)
  • Sound knowledge of statistics
  • Experience in developing data processing tasks using PySpark, such as reading, merging, enrichment, and loading of data from external systems to target data destinations
  • Working knowledge of Big Data and/or Hadoop environments
  • Experience creating CI/CD pipelines using Jenkins or similar tools
  • Practiced in eXtreme Programming (XP) disciplines
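
For illustration, a minimal PySpark ML sketch of training and validating a simple predictive model, as referenced above; the input path, feature columns, and label are hypothetical placeholders:

    # Minimal train/validate sketch with PySpark ML; paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression
    from pyspark.ml.evaluation import RegressionEvaluator

    spark = SparkSession.builder.appName("model-sketch").getOrCreate()

    # Read pre-merged, enriched data (hypothetical curated dataset)
    readings = spark.read.parquet("/data/curated/sensor_readings_enriched/")

    # Assemble features and split for training/validation
    assembler = VectorAssembler(inputCols=["temperature", "vibration", "load"], outputCol="features")
    dataset = assembler.transform(readings).select("features", "failure_hours")
    train, test = dataset.randomSplit([0.8, 0.2], seed=42)

    # Train, validate, and report a simple error metric
    model = LinearRegression(featuresCol="features", labelCol="failure_hours").fit(train)
    rmse = RegressionEvaluator(labelCol="failure_hours", metricName="rmse").evaluate(model.transform(test))
    print(f"Validation RMSE: {rmse:.2f}")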

Simplifai Cognitive Solutions Pvt Ltd
Posted by Priyanka Malani
Pune
2 - 15 yrs
₹10L - ₹30L / yr
Spark
Big Data
Apache Spark
Python
PySpark

We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role is part of the research and development team, where, with enthusiasm and knowledge, you will be our technical evangelist for the development of our inspection technology and products.

 

At Elop we are developing product lines for sustainable infrastructure management using our own patented technology for ultrasound scanners, combining this with other sources to get a holistic overview of the concrete structure. At Elop we will provide you with world-class colleagues who are highly motivated to position the company as an international standard in structural health monitoring. With the right character, you will be professionally challenged and developed.

This position requires travel to Norway.

 

Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.

https://elop.no/

https://www.simplifai.ai/en/


Roles and Responsibilities

  • Define technical scope and objectives through research and participation in requirements gathering and definition of processes
  • Ingest and process raw data from data sources (Elop Scanner) into the Big Data ecosystem
  • Process real-time data feeds using the Big Data ecosystem (a minimal sketch follows this list)
  • Design, review, implement and optimize data transformation processes in Big Data ecosystem
  • Test and prototype new data integration/processing tools, techniques and methodologies
  • Conversion of MATLAB code into Python/C/C++.
  • Participate in overall test planning for the application integrations, functional areas and projects.
  • Work with cross functional teams in an Agile/Scrum environment to ensure a quality product is delivered.
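
For illustration, a minimal Spark Structured Streaming sketch of real-time feed processing of the kind described above; the Kafka brokers, topic, message schema, and output paths are hypothetical placeholders:

    # Minimal real-time feed-processing sketch with Spark Structured Streaming and Kafka.
    # Broker addresses, topic names, and the message schema are hypothetical.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("scanner-stream-sketch").getOrCreate()

    schema = StructType([
        StructField("scanner_id", StringType()),
        StructField("thickness_mm", DoubleType()),
        StructField("measured_at", TimestampType()),
    ])

    # Ingest the raw scanner feed from Kafka
    raw = (spark.readStream.format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "scanner-readings")
           .load())

    # Parse JSON payloads and compute a windowed per-scanner average
    readings = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("r")).select("r.*")
    averages = (readings
                .withWatermark("measured_at", "10 minutes")
                .groupBy(F.window("measured_at", "5 minutes"), "scanner_id")
                .agg(F.avg("thickness_mm").alias("avg_thickness_mm")))

    # Write the aggregated stream to the curated zone as Parquet
    query = (averages.writeStream.outputMode("append")
             .option("checkpointLocation", "/data/checkpoints/scanner_avg/")
             .format("parquet").option("path", "/data/curated/scanner_avg/")
             .start())
    query.awaitTermination()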

Desired Candidate Profile

  • Bachelor's degree in Statistics, Computer Science, or equivalent
  • 7+ years of experience in the Big Data ecosystem, especially Spark, Kafka, Hadoop, and HBase
  • 7+ years of hands-on experience in Python/Scala is a must
  • Experience in architecting big data applications is needed
  • Excellent analytical and problem-solving skills
  • Strong understanding of data analytics and data visualization; must be able to help the development team with visualization of data
  • Experience with signal processing is a plus
  • Experience in working on client-server architecture is a plus
  • Knowledge of database technologies such as RDBMS, Graph DB, Document DB, Apache Cassandra, and OpenTSDB
  • Good communication skills, written and oral, in English

We can Offer

  • An everyday life with exciting and challenging tasks in the development of socially beneficial solutions
  • Be a part of the company's Research and Development team, creating unique and innovative products
  • Colleagues with world-class expertise, and an organization that has ambitions and is highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure
  • A good working environment with skilled and committed colleagues in an organization with short decision paths
  • Professional challenges and development
Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr
Python
Hadoop
Big Data
Spark
Data engineering

Key Responsibilities (Data Developer: Python, Spark):

Experience: 2 to 9 yrs

  • Development of data platforms, integration frameworks, processes, and code
  • Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages
  • Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests, and unit tests
  • Elaborate stories in a collaborative agile environment (Scrum or Kanban)
  • Familiarity with cloud platforms like GCP, AWS, or Azure
  • Experience with large data volumes
  • Familiarity with writing REST-based services
  • Experience with distributed processing and systems
  • Experience with Hadoop/Spark toolsets
  • Experience with relational database management systems (RDBMS)
  • Experience with Data Flow development
  • Knowledge of Agile and associated development techniques including:

GitHub
Posted by Nataliia Mediana
Remote only
3 - 15 yrs
$50K - $80K / yr
Data Science
Data Scientist
Data engineering
Financial analysis
Finance

We are a nascent quantitative hedge fund led by an MIT PhD and Math Olympiad medallist, offering opportunities to grow with us as we build out the team. Our fund has world-class investors and big data experts as part of the GP, top-notch ML experts as advisers to the fund, and equity funding to grow the team, license data, and scale the data processing.

We are interested in researching and taking live a variety of quantitative strategies based on historic and live market data, alternative datasets, social media data (both audio and video), and stock fundamental data.

You would join, and, if qualified, lead a growing team of data scientists and researchers, and be responsible for the complete lifecycle of quantitative strategy implementation and trading.

Requirements:

  • At least 3 years of relevant ML experience
  • Graduation date: 2018 or earlier
  • 3-5 years of experience in high-level Python programming
  • Master's degree (or PhD) in a quantitative discipline such as Statistics, Mathematics, Physics, or Computer Science from a top university
  • Good knowledge of applied and theoretical statistics, linear algebra, and machine learning techniques
  • Ability to leverage financial and statistical insights to research, explore, and harness a large collection of quantitative strategies and financial datasets in order to build strong predictive models
  • Should take ownership of the research, design, development, and implementation of strategy development, and communicate effectively with other teammates
  • Prior experience and good knowledge of the lifecycle and pitfalls of algorithmic strategy development and modelling
  • Good practical knowledge of understanding financial statements, value investing, and portfolio and risk management techniques
  • A proven ability to lead and drive innovation to solve challenges and roadblocks in project completion
  • A valid GitHub profile with some activity in it

Bonus to have:

  • Experience in storing and retrieving data from large and complex time-series databases
  • Very good practical knowledge of time-series modelling and forecasting (ARIMA, ARCH, and stochastic modelling); a minimal sketch follows this list
  • Prior experience in optimizing and backtesting quantitative strategies, doing return and risk attribution, and feature/factor evaluation
  • Knowledge of the AWS/cloud ecosystem is an added plus (EC2, Lambda, EKS, SageMaker, etc.)
  • Knowledge of REST APIs and data extraction and cleaning techniques
  • Good to have experience in PySpark or other big data programming/parallel computing
  • Familiarity with derivatives; knowledge of multiple asset classes along with equities
  • Any progress towards CFA or FRM is a bonus
  • Average tenure of at least 1.5 years in a company
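
For illustration, a minimal time-series forecasting sketch using ARIMA from the statsmodels package, as mentioned above; the price series and file name are hypothetical:

    # Minimal ARIMA forecasting sketch with statsmodels; the price series is hypothetical.
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Load a daily close-price series (hypothetical CSV export)
    prices = pd.read_csv("daily_closes.csv", parse_dates=["date"], index_col="date")["close"]

    # Fit a simple ARIMA(1, 1, 1) on the training window and forecast the next 5 days
    train = prices.iloc[:-5]
    model = ARIMA(train, order=(1, 1, 1)).fit()
    forecast = model.forecast(steps=5)

    # Compare the forecast against the held-out observations
    holdout = prices.iloc[-5:]
    print(pd.DataFrame({"forecast": forecast.values, "actual": holdout.values}, index=holdout.index))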
MNC
Agency job
via Fragma Data Systems by Geeti Gaurav Mohanty
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹12L / yr
Spark
Big Data
Data engineering
Hadoop
Apache Kafka
Data Engineer

  • Drive the data engineering implementation
  • Strong experience in building data pipelines
  • AWS stack experience is a must
  • Deliver conceptual, logical, and physical data models for the implementation teams
  • SQL stronghold is a must: advanced SQL working knowledge and experience working with a variety of relational databases, including SQL query authoring
  • AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow, and Kafka (a minimal sketch follows this list)
  • Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda, and Elasticsearch
  • Ability to use a major programming language (e.g. Python/Java) to process data for modelling
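
For illustration, a minimal Airflow DAG sketch of an S3-centric pipeline of the kind described above; the DAG id, task logic, and schedule are hypothetical placeholders (the schedule argument name can vary across Airflow versions):

    # Minimal Airflow DAG sketch for an S3-based pipeline; all names and paths are hypothetical.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_to_s3(**context):
        # Placeholder: pull data from a source system and land it in s3://raw-bucket/...
        pass

    def transform_with_spark(**context):
        # Placeholder: submit a Spark job that reads the raw S3 data and writes curated Parquet
        pass

    with DAG(
        dag_id="daily_s3_pipeline_sketch",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
        transform = PythonOperator(task_id="transform_with_spark", python_callable=transform_with_spark)
        extract >> transform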
Service based company
Remote only
3 - 8 yrs
₹8L - ₹13L / yr
pandas
PySpark
Big Data
Data engineering
Performance optimization
Responsibilities include data pre-processing, data transformation, data analysis, and feature engineering, along with performance optimization of scripts and productionizing of code (SQL, Pandas, Python or PySpark, etc.); a minimal sketch of the optimization work follows below. The candidate must have expertise in ADF (Azure Data Factory) and be well versed in Python.

Required skills:

  • Bachelor's in Computer Science, Data Science, Computer Engineering, IT, or equivalent
  • Fluency in Python (Pandas), PySpark, SQL, or similar
  • Azure Data Factory experience (minimum 12 months)
  • Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
  • Experience in production optimization and end-to-end performance tracing (technical root-cause analysis)
  • Ability to work independently, with demonstrated experience in project or program management
  • Azure experience; ability to translate data scientists' Python code and make it efficient (production-ready) for cloud deployment
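
For illustration, a minimal sketch of the performance-optimization work mentioned above: replacing a row-wise Pandas apply with a vectorized equivalent; the DataFrame and column names are hypothetical:

    # Minimal performance-optimization sketch: vectorizing a row-wise Pandas computation.
    # The DataFrame and column names are hypothetical.
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"amount": np.random.rand(1_000_000) * 100,
                       "discount_pct": np.random.rand(1_000_000) * 20})

    # Slow, row-wise version (typical prototype code handed over for productionizing)
    def net_amount(row):
        return row["amount"] * (1 - row["discount_pct"] / 100)
    slow = df.apply(net_amount, axis=1)

    # Vectorized version: same result, dramatically faster on large frames
    fast = df["amount"] * (1 - df["discount_pct"] / 100)

    assert np.allclose(slow.values, fast.values)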
Paisabazaar.com
Posted by Amit Gupta
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹6L - ₹18L / yr
Spark
MapReduce
Hadoop
ETL
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer:

  • Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
  • Experience in architecting data ingestion, storage, and consumption models
  • Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
  • Knowledge of various ETL tools & techniques
Data ToBiz
Posted by PS Dhillon
Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
Data Warehousing
Amazon Redshift
Analytics
Python
Amazon Web Services (AWS)
Job Responsibilities:
As a Data Warehouse Engineer in our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
Develop or modify procedures to solve complex database design problems, including performance, scalability, security, and integration issues for various clients (on-site and off-site).
Design, develop, test, and support the data warehouse solution.
Adapt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration.
Design and implement formal data warehouse testing strategies and plans, including unit testing, functional testing, integration testing, performance testing, and validation testing.
Evaluate all existing hardware and software according to required standards, and be able to configure hardware clusters as per the scale of data.
Data integration using enterprise development toolsets (e.g. ETL, MDM, Quality, CDC, Data Masking).
Maintain and develop all logical and physical data models for the enterprise data warehouse (EDW).
Contribute to the long-term vision of the enterprise data warehouse (EDW) by delivering Agile solutions.
Interact with end users/clients and translate business language into technical requirements.
Act independently to expose and resolve problems.
Participate in data warehouse health monitoring and performance optimization, as well as quality documentation.

Job Requirements:
2+ years of experience working in software development & data warehouse development for enterprise analytics.
2+ years of working with Python, with major experience in Redshift as a must and exposure to other warehousing tools.
Deep expertise in data warehousing and dimensional modeling, and the ability to bring best practices with regard to data management, ETL, API integrations, and data governance.
Experience working with data retrieval and manipulation tools for various data sources, such as relational databases (MySQL, PostgreSQL, Oracle) and cloud-based storage.
Experience with analytic and reporting tools (Tableau, Power BI, SSRS, SSAS). Experience in the AWS cloud stack (S3, Glue, Redshift, Lake Formation); a minimal load sketch follows this list.
Experience in various DevOps practices, helping the client deploy and scale systems as per requirement.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of the Logistics and/or Transportation domain is a plus.
Ability to handle/ingest very large data sets (both real-time and batched data) in an efficient manner.
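
For illustration, a minimal sketch of loading S3 data into Redshift with a COPY command via the psycopg2 driver; the cluster endpoint, table, bucket, and IAM role are hypothetical placeholders:

    # Minimal Redshift load sketch: COPY Parquet files from S3 into a warehouse table.
    # Connection details, table, bucket, and IAM role below are hypothetical placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
        port=5439, dbname="analytics", user="etl_user", password="***",
    )

    copy_sql = """
        COPY analytics.shipments
        FROM 's3://curated-bucket/shipments/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        FORMAT AS PARQUET;
    """

    with conn, conn.cursor() as cur:
        cur.execute(copy_sql)   # Redshift pulls the files directly from S3

    conn.close()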