Responsibilities:
- Must be able to write quality code and build secure, highly available systems.
- Assemble large, complex datasets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc., with guidance.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Monitor performance and advise on any necessary infrastructure changes.
- Define data retention policies.
- Implement ETL processes and an optimal data pipeline architecture.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Create design documents that describe the functionality, capacity, architecture, and process.
- Develop, test, and implement data solutions based on finalized design documents.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Proactively identify potential production issues and recommend and implement solutions
Skillsets:
- Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Proficient understanding of distributed computing principles
- Experience in working with batch-processing/real-time systems using various open-source technologies like NoSQL stores, Spark, Pig, Hive, and Apache Airflow.
- Experience implementing complex projects dealing with considerable data sizes (petabyte scale).
- Optimization techniques (performance, scalability, monitoring, etc.)
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Creation of DAGs for data engineering (a minimal Airflow sketch follows this list)
- Expert at Python/Scala programming, especially for data engineering/ETL purposes
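A minimal sketch of the kind of DAG creation referenced above, assuming a hypothetical daily extract/transform/load flow; the task names and callables are illustrative placeholders, not a prescribed pipeline.

```python
# Minimal Apache Airflow DAG sketch for a daily ETL run.
# Task names and the extract/transform/load callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extracting source data")  # placeholder: pull raw records from a source system


def transform():
    print("transforming data")  # placeholder: clean and reshape the extracted data


def load():
    print("loading data")  # placeholder: write transformed data to the warehouse


with DAG(
    dag_id="example_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Chain the tasks so the run order matches the ETL steps.
    extract_task >> transform_task >> load_task
```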
About Ganit Business Solutions
Ganit Inc. is in the business of enhancing the Decision Making Power (DMP) of businesses by offering solutions that lie at the crossroads of discovery-based artificial intelligence, hypothesis-based analytics, and the Internet of Things (IoT).
The company's offerings consist of a functioning product suite and a bespoke service offering as its solutions. The goal is to integrate these solutions into the core of their clients' decision-making processes as seamlessly as possible. Customers in the FMCG/CPG, Retail, Logistics, Hospitality, Media, Insurance, and Banking sectors are served by Ganit's offices in both India and the United States. The company views data as a strategic resource that may assist other businesses in achieving growth in both their top and bottom lines of business. We build and implement AI and ML solutions that are purpose-built for certain sectors to increase decision velocity and decrease decision risk.
- Experience building applications using NodeJS and frameworks such as Express.
- Thorough understanding of React.js and NodeJS including its core principles.
- Ability to understand business requirements and translate them into technical requirements.
- Familiarity with code versioning tools (such as Git, SVN, and Mercurial).
- Understanding the nature of asynchronous programming and its quirks and workarounds
- Strong experience with MongoDB, Postgres
- Highly proficient with Vue.js framework and its core principles such as components, reactivity, and the virtual DOM
- Familiarity with the Vue.js ecosystem, including Vue CLI, Vuex, Vue Router
- Good understanding of HTML5, CSS3, and Sass
- Understanding of server-side rendering and its benefits and use cases
About Us
GradRight is an Ed-fintech startup focused on global higher education. We are committed to enabling global youth aspirations by helping students find the "right education" at the "right cost". We aim to assist students in discovering their best fit universities and funding plans.
Our flagship product - FundRight is the world’s first student loan bidding platform. GradRight won the HSBC Fintech Innovation Challenge supported by the Ministry of Electronics & IT, Government of India & was among the top 7 global finalists in The PIEoneer awards hosted by The PIE News, UK. Since September 2020, we have received over US$500 million in loan requests and successfully secured over US$75 million in loan approvals for more than 1600 students.
Brief:
We are pursuing a complex set of problems that involve building for an international audience and for an industry that has largely been service-centric. As an SDE at GradRight, you’ll bring an unmatched customer-centricity to your work, with a focus on building for the long term and large scale.
You’ll build beautiful front-end experiences and enable flexible customer journeys while focusing on an international audience.
Responsibilities:
- Lead design discussions and decisions around building a scalable and modular front-end architecture
- Write clean and modular code
- Participate in sprint ceremonies and actively contribute to scaling the engineering organization from a process perspective
- Stay on top of the software engineering ecosystem and propose new technologies/methodologies that can be adopted
- Contribute to engineering hiring by conducting interviews
- Manage and mentor a small team of junior engineers
Requirements:
- At least 6 years of experience as an SDE, with at least 2 years of experience leading teams
- Strong experience with Vue.js
- Able to write maintainable, scalable and unit-testable code
- Strong understanding of software design principles and patterns
- Excellent command over data structures and algorithms
- Passion for solving complex problems
- Experience with team management
- Excellent written and verbal communication skills
Good to have:
- Experience with GraphQL and micro-frontend architectures
- Worked on products that addressed an international audience
- Worked on products that scaled to millions of users
Where: Hyderabad/ Bengaluru, India (Hybrid Mode 3 Days/Week in Office)
Job Description:
- Collaborate with stakeholders to develop a data strategy that meets enterprise needs and industry requirements.
- Create an inventory of the data necessary to build and implement a data architecture.
- Envision data pipelines and how data will flow through the data landscape.
- Evaluate current data management technologies and what additional tools are needed.
- Determine upgrades and improvements to current data architectures.
- Design, document, build, and implement database architectures and applications. Should have hands-on experience in building high-scale OLAP systems.
- Build data models for database structures, analytics, and use cases.
- Develop and enforce database development standards, with solid DB/query optimization capabilities.
- Integrate new systems and functions like security, performance, scalability, governance, reliability, and data recovery.
- Research new opportunities and create methods to acquire data.
- Develop measures that ensure data accuracy, integrity, and accessibility.
- Continually monitor, refine, and report data management system performance.
Required Qualifications and Skillset:
- Extensive knowledge of Azure, GCP clouds, and DataOps Data Eco-System (super strong in one of the two clouds and satisfactory in the other one)
- Hands-on expertise in systems like Snowflake, Synapse, SQL DW, BigQuery, and Cosmos DB. (Expertise in any 3 is a must)
- Azure Data Factory, Dataiku, Fivetran, Google Cloud Dataflow (Any 2)
- Hands-on experience in working with services/technologies like - Apache Airflow, Cloud Composer, Oozie, Azure Data Factory, and Cloud Data Fusion (Expertise in any 2 is required)
- Well-versed with Data services, integration, ingestion, ELT/ETL, Data Governance, Security, and Meta-driven Development.
- Expertise in RDBMS (relational database management system) – writing complex SQL logic, DB/Query optimization, Data Modelling, and managing high data volume for mission-critical applications.
- Strong programming skills in Python and PySpark (see the sketch after this list).
- Clear understanding of data best practices prevailing in the industry.
- Preference to candidates having Azure or GCP architect certification. (Either of the two would suffice)
- Strong networking and data security experience.
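A minimal PySpark sketch of the kind of ingestion and aggregation work implied by the Python/PySpark requirement above, assuming hypothetical file paths and column names; the actual stack (Snowflake, Synapse, BigQuery, etc.) would change the read/write targets.

```python
# Minimal PySpark sketch: read raw data, apply a simple transformation,
# and write an aggregated output. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example_ingest").getOrCreate()

# Read a raw CSV extract (placeholder path).
orders = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

# Basic cleanup and a daily revenue aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_revenue"))
)

# Write the result as partitioned Parquet for downstream analytics.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_revenue"
)
```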
Awareness of the Following:
- Application development understanding (Full Stack)
- Experience on open-source tools like Kafka, Spark, Splunk, Superset, etc.
- Good understanding of Analytics Platform Landscape that includes AI/ML
- Experience in any data visualization tool like Power BI/Tableau/Qlik/QuickSight, etc.
About Us
Gramener is a design-led data science company. We build custom Data & AI solutions that help solve complex business problems with actionable insights and compelling data stories. We partner with enterprise data and digital transformation teams to improve the data-driven decision-making culture across the organization. Our open standard low-code platform, Gramex, rapidly builds engaging Data & AI solutions across multiple business verticals and use cases. Our solutions and technology have been recognized by analysts such as Gartner and Forrester and have won several awards.
We Offer You:
- a chance to try new things & take risks.
- meaningful problems you'll be proud to solve.
- people you will be comfortable working with.
- transparent and innovative work environment.
To know more about us visit Gramener Website and Gramener Blog.
If you are interested, kindly share the below-mentioned details.
Total Experience:
Relevant Experience:
Notice Period:
CTC:
ECTC:
Current Location:
- Support and maintain production Cognos web portal to manage the OLAP cube and folders.
- Use various transformations like expression, union, source qualifier, aggregator, router, stored procedure, and lookup transformations.
- Work on transformations such as source qualifier, joiner, lookup, rank, expression, aggregator, and sequence generator.
- Work with relational databases like DB2.
- Develop mappings with XML as the target and format the target data according to requirements.
- Work with logical, physical, conceptual, star, and snowflake schema data models.
- Extract data from flat files and DB2 and load the data into the Salesforce database.
- Work with various data sources: relational, flat file (fixed-width, delimited), and XML.
- Participate in all phases of the SDLC, from requirements gathering and architecture design to development, testing, and data migration.
- Participate in all phases of the SDLC: requirement gathering, design, development, testing, implementation, and post-production support.
- Write Perl scripts for file transfers and file renaming, and a few other database scripts to be executed from UNIX.
- Create functional and technical mapping specifications and documentation for QA team.
- Support the implementation and execution of MapReduce programs in a cluster environment.
- Maintain data integrity during extraction, manipulation, processing, analysis and storage.
- Migrate repository objects, services and scripts from development environment to production environment.
Almost a decade old, it is a venture committed to bringing together a varied range of traditional crafts and techniques of dyeing, weaving, printing, and hand embroidery. The founders have dedicated their lives to promoting Indian block prints and providing employment and hand-embroidery training to women so that numerous underprivileged women can be empowered.
What you will do:
- Handling the return filing of EPF and ESIC with a consultant
- Filing GST Return (3B and GSTR-1)
- Reconciling sales data and preparing data for GST returns (GSTR-1 & 3B)
- Managing revenue booking and its auditing
- Handling GST audit and TDS return filing
- Processing, reconciling and managing payment/refund
- Coordinating with stakeholders such as various internal departments
- Processing and reconciling invoices
- Processing and controlling payroll while ensuring statutory compliance
- Handling and managing PCS rate payment
- Bookkeeping and maintaining financial records and financial statements related to the company's payroll
- Maintaining up-to-date and accurate records of the financial details related to payroll and reporting the same to the management at regular intervals
Desired Candidate Profile
What you need to have:
- Graduation in Commerce is a must
- Relevant work experience of 6-7 years in handling payroll (finance)
- E-commerce, retail or manufacturing background is preferred
- Proficiency in Excel
- Good communication skills and an eye for detail
Hiring for Salesforce Commerce Cloud Developer (SFCC) at Appcino Technologies.
- Lead the effort to design, build and configure applications, acting as the primary point of contact.
- On a need basis, design, build, and configure applications to meet business process and application requirements.
- Must have worked on SFRA, Demandware, and Site Genesis.
- Lead large-scale eCommerce implementations, utilizing Salesforce Commerce Cloud B2C, providing both oversight and hands on contributions to the software design, development, and integration
- Independently lead the estimation effort for a project
- Participate in the development of conceptual and logical architectures
- Design, develop and maintain application architectures that support client’s business requirements
- Resolve integration and interfacing issues between various back-end systems
- Optimize application performance and scalability
Requirements
- A minimum of 3 to 7 years of experience in the software industry is required.
- Experience in Salesforce Lightning B2C Commerce
- Experience in Apex, Salesforce Lightning components, Force.com
- Strong E-commerce platform development is preferred
- Good experience at applying clean code best practices and good debugging skills
- Good understanding of Salesforce security model, Communities, and best practices
Data Scientist
Requirements
- B.Tech/Masters in Mathematics, Statistics, Computer Science, or another quantitative field
- 2-3+ years of work experience in the ML domain (2-5 years of experience)
- Hands-on coding experience in Python
- Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, deep learning stacks, and NLP
- Working knowledge of TensorFlow/PyTorch (a brief PyTorch sketch follows this list)
Optional Add-ons:
- Experience with distributed computing frameworks: MapReduce, Hadoop, Spark, etc.
- Experience with databases: MongoDB
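A minimal sketch of the hands-on Python/PyTorch skills listed above: a tiny feed-forward classifier trained on synthetic data. The network shape, data, and hyperparameters are illustrative only and not part of the role description.

```python
# Minimal PyTorch classification sketch: a small feed-forward network
# trained on synthetic binary-labelled data. All shapes and hyperparameters
# are illustrative only.
import torch
from torch import nn

# Synthetic dataset: 256 samples, 10 features, binary labels.
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# A short full-batch training loop.
for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"final loss: {loss.item():.4f}, training accuracy: {accuracy:.2%}")
```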
Their services are available across the globe, with over 65% of their client base being from the US, UK, and Canada. The company's primary focus is on Ayurveda and taking this ancient knowledge to anyone who wishes to bring back balance to their health and apply the tools in their everyday life.
- Writing compelling content for email campaigns, landing pages, advertisements, live events, webinars and communication with customers
- Improving the current product portfolio, leading to better audience engagement and brand positioning
- Researching topics that are of interest to our target audience and creating a pipeline of future launches
- Staying up-to-date with trending advertising strategies
- Creating out-of-the-box campaign ideas
- Proofreading, editing and improving content created by others and ensuring brand consistency
- Working alongside team members to come up with concepts and ideas to increase revenue
- Ensuring correct grammar and spelling throughout content and website
Desired Candidate Profile
What you need to have:
- Researching consumer interests to come up with relevant topics, ensuring these journalistic articles are backed by scientific Ayurvedic research, and writing them in an easy-to-understand fashion that is also scientifically accurate.
- Researching the target audience’s interests and needs
- Researching the scientific Ayurvedic understanding of said topics
- Creating highly valuable informational articles for online purposes
- Staying up-to-date with latest trends in USA and EU regarding skincare, health and wellness
- Creating mind-blowing value for our customers
- Establishing the organization as an authoritative figure through accurate and useful informational pieces
- Ensuring compliance with the law (e.g. copyright and regulatory bodies)
Mogi I/O (www.mogiio.com), a Delhi-based video delivery SaaS venture, is looking for a full-stack/MEAN-stack developer for an internship role.
The candidate should be high-performance and high-energy, working on core video and image tech, and should be a hands-on full-stack or MEAN-stack coder.
Experience – Any back-end experience working with Node.js and MongoDB.
Some core requirements -
- Should be hands-on with the back end (Node.js and MongoDB), with experience in building real-world applications.
- Highly curious, ability to learn and implement new technologies independently
- High ownership and impact-driven, capable of juggling multiple things
- Effective team player, able to mentor and learn from teammates.
Technical Proficiency - MongoDB, Node.js
Location – Work from Office (Saket, Delhi)
Compensation -
10k-12k per month based on interview performance and skill set, with the opportunity to convert to a full-time role post internship completion.
Pre-Placement Offer (performance-based).
Indium Software is a niche technology solutions company with deep expertise in Digital, QA, and Gaming. Indium helps customers in their Digital Transformation journey through a gamut of solutions that enhance business value.
With over 1,000 associates globally, Indium operates through offices in the US, UK, and India.
Visit www.indiumsoftware.com to know more.
Job Title: Analytics Data Engineer
What will you do:
The Data Engineer must be an expert in SQL development, further providing support to the Data and Analytics team in database design, data flow, and analysis activities. The position of the Data Engineer also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing. The Data Engineer defines and builds the data pipelines that will enable faster, better, data-informed decision-making within the business.
We ask:
- Extensive experience with SQL and a strong ability to process and analyse complex data
- Ability to design, build, and maintain the business's ETL pipeline and data warehouse, with demonstrated expertise in data modelling and query performance tuning on SQL Server
- Proficiency with analytics, especially funnel analysis, and hands-on experience with analytical tools like Mixpanel, Amplitude, Thoughtspot, Google Analytics, and similar tools
- Ability to work on the tools and frameworks required for building efficient and scalable data pipelines
- Excellent at communicating and articulating ideas, with the ability to influence others and continuously drive towards a better solution
- Experience working with Python, Hive queries, Spark, PySpark, Spark SQL, and Presto (see the sketch after the lists below)
- Relate Metrics to product
- Programmatic Thinking
- Edge cases
- Good Communication
- Product functionality understanding
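A minimal sketch of the Python/PySpark/Spark SQL and funnel-analysis experience asked for above, assuming a hypothetical events dataset; the path, event names, and columns are illustrative only.

```python
# Minimal PySpark + Spark SQL sketch: a simple three-step funnel count over
# a hypothetical events table. Paths, event names, and columns are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("funnel_example").getOrCreate()

# Assume an events dataset with (user_id, event_name, event_time) columns.
events = spark.read.parquet("/data/events")
events.createOrReplaceTempView("events")

# Count distinct users reaching each funnel stage.
funnel = spark.sql("""
    SELECT
        COUNT(DISTINCT CASE WHEN event_name = 'signup'   THEN user_id END) AS signed_up,
        COUNT(DISTINCT CASE WHEN event_name = 'activate' THEN user_id END) AS activated,
        COUNT(DISTINCT CASE WHEN event_name = 'purchase' THEN user_id END) AS purchased
    FROM events
""")

funnel.show()
```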
Perks & Benefits:
A dynamic, creative & intelligent team that will make you love being at work.
An autonomous and hands-on role to make an impact; you will be joining at an exciting time of growth!
Flexible work hours, an attractive pay package, and perks
An inclusive work environment that lets you work in the way that works best for you!